Researchers at Cornell University have developed a silent-speech recognition (SSR) device that can identify silent commands using images of skin deformation in the neck and face.
The device, called SpeeChin, uses a neck-mounted infrared camera to capture movements in the neck and face, learning a person’s speech patterns both with and without vocalised sound and translating the movements into device commands.
During tests, the device averaged an accuracy of 90.5% with English silent voice commands and 91.6% with Mandarin. It allows users in busy environments, such as shared office spaces, to operate devices hands-free without making disruptive sound.
Ruidong Zhang, doctoral student in the field of information science, Cornell University, explained: “We feel a necklace is a form factor that people are used to, as opposed to ear-mounted devices, which may not be as comfortable. As far as silent speech, people may think, ‘I already have a speech recognition device on my phone.’ But you need to vocalise sound for those, and that may not always be socially appropriate, or the person may not be able to vocalize speech.
“We’re introducing an entirely new form factor, new hardware, into this field.”