To enable the robot's speaking abilities, engineers at Japan's Kagawa University used an air pump, artificial vocal cords, a resonance tube, a nasal cavity, and a microphone attached to a sound analyzer as substitutes for human vocal organs. The robot not only talks; it also uses a learning algorithm to mimic the sounds of human speech. By inputting the voices of both hearing-impaired and non-hearing-impaired people into the microphone, researchers were able to plot the differences in sound on a map. During speech training, the robot "listens" to the subjects talk while comparing their pronunciation to that of subjects who are not hearing-impaired. The robot then generates a personalized visualization that allows subjects to adjust their pronunciation according to the target points on the speech map.
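For the curious, the "compare pronunciation to a target point on a map" idea can be sketched in a few lines. This is purely a hypothetical illustration, not the Kagawa researchers' actual algorithm: it assumes vowel sounds are reduced to 2-D feature points (say, first and second formant frequencies in Hz) and just measures how far, and in which direction, a learner's sound sits from the reference target.

```python
import numpy as np

# Hypothetical reference "speech map": target points measured from
# non-hearing-impaired speakers. The vowels and numbers are made up
# for illustration -- the article doesn't publish the real data.
REFERENCE = {
    "a": np.array([800.0, 1200.0]),   # [F1, F2] in Hz (assumed features)
    "i": np.array([300.0, 2300.0]),
}

def pronunciation_feedback(vowel, measured):
    """Compare a learner's feature point to the reference target.

    Returns the distance to the target and the adjustment vector
    (which way to move on the map to match the target).
    """
    target = REFERENCE[vowel]
    measured = np.asarray(measured, dtype=float)
    delta = target - measured          # direction toward the target point
    return float(np.linalg.norm(delta)), delta

# A learner's /a/ lands a bit low and flat; the feedback says
# how far off it is and which way to adjust.
dist, delta = pronunciation_feedback("a", [760.0, 1130.0])
```

In a real trainer the visualization would plot both points so the learner can literally watch their dot drift toward the target as they adjust.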
Admittedly that's pretty neat, but I suspect there's a much more devious robotic plan involved. You know those scary-ass fish with a million teeth that live way down in the ocean and have that little light they dangle around to catch other fish? Well, this is the same thing, except for human peckers.
Hit it for the video if you like torturing yourself.
Thanks to Eloc, scott, Ste, celith, Zach, Gordon, Spikey DaPikey, Drew, Pete, prestone, McBiteypants and Gavin, all of whom learned how to kiss by making out with Teddy Ruxpin dolls. LOLWUT?!