
Sonar glasses that speak words just by reading lip movements

Some people cannot vocalize but can still form words with their lips; others simply cannot make themselves heard over loud background noise. A new pair of experimental glasses that uses sonar technology could help in both situations. The device was developed at the Smart Computer Interfaces for Future Interactions (SciFi) Lab at Cornell University.

Mounted on the underside of the frame are two tiny speakers and two small but highly sensitive microphones. The speakers emit sound waves that are inaudible to the wearer; the waves bounce off the wearer's moving mouth and return to the microphones as echoes. The echo data is transmitted wirelessly to a smartphone, where a software algorithm decodes the spoken words from the lip movements. The system currently recognizes 31 words and commands with about 95% accuracy and can voice them aloud for a listener. Remarkably, the software needs only a few minutes of training: the speech-impaired user simply mouths a short set of words, and with continued training the system can learn more. The glasses run for up to ten hours on a single charge.
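To make that pipeline concrete, here is a minimal, hypothetical sketch in Python of how an active-sonar silent-speech system of this general kind could work. It is not the Cornell lab's code: the sample rate, chirp band, frame length, and the simple template-matching classifier are all assumptions standing in for the deep-learning model that reportedly runs on the paired smartphone.

```python
# Illustrative sketch only; NOT the SciFi Lab implementation.
# Idea: emit an inaudible chirp each frame, cross-correlate the echo with the
# chirp to get a per-frame "echo profile", and match that profile against
# templates recorded during a few minutes of user training.

import numpy as np

FS = 48_000              # assumed sample rate of the frame-mounted speaker/mic (Hz)
CHIRP_LEN = 0.01         # assumed 10 ms chirp per sensing frame
F0, F1 = 18_000, 21_000  # assumed near-ultrasonic band, inaudible to most people

def make_chirp():
    """Linear frequency sweep the tiny speakers would emit each frame."""
    t = np.arange(int(FS * CHIRP_LEN)) / FS
    k = (F1 - F0) / CHIRP_LEN
    return np.sin(2 * np.pi * (F0 * t + 0.5 * k * t**2))

def echo_profile(mic_frame, chirp):
    """Cross-correlate the microphone frame with the emitted chirp.
    Peaks correspond to reflections arriving from different distances
    (lips, chin, cheeks)."""
    corr = np.correlate(mic_frame, chirp, mode="valid")
    return np.abs(corr) / (np.linalg.norm(mic_frame) * np.linalg.norm(chirp) + 1e-9)

def train_templates(labeled_frames, chirp):
    """'Few minutes of training': average the echo profiles recorded while
    the user silently mouths each command."""
    templates = {}
    for label, frames in labeled_frames.items():
        profiles = np.stack([echo_profile(f, chirp) for f in frames])
        templates[label] = profiles.mean(axis=0)
    return templates

def classify(mic_frame, templates, chirp):
    """Nearest-template match; the real system reportedly uses a deep
    network on the smartphone instead of this toy classifier."""
    profile = echo_profile(mic_frame, chirp)
    return min(templates, key=lambda lbl: np.linalg.norm(templates[lbl] - profile))

if __name__ == "__main__":
    chirp = make_chirp()
    rng = np.random.default_rng(0)

    def fake_frame(delay):
        # Synthetic stand-in: different "mouth shapes" reflect the chirp
        # with different delays and a little noise.
        frame = np.zeros(int(FS * CHIRP_LEN) + 200)
        frame[delay:delay + chirp.size] += chirp
        return frame + 0.05 * rng.standard_normal(frame.size)

    labeled = {"play": [fake_frame(10) for _ in range(5)],
               "pause": [fake_frame(120) for _ in range(5)]}
    templates = train_templates(labeled, chirp)
    print(classify(fake_frame(120), templates, chirp))  # expected: "pause"
```

The point the sketch captures is that each mouth shape reflects the inaudible chirp differently, so the per-frame echo profile acts as a feature vector that a trained classifier can map onto a small vocabulary of commands.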
