Innovators from IIT, AIIMS Jodhpur develop ‘Talking Gloves’ for differently-abled people
Date: 30-Nov-2021

NEW DELHI:
 
RESEARCHERS from the Indian Institute of Technology (IIT), Jodhpur and the All India Institute of Medical Sciences (AIIMS), Jodhpur have developed low-cost “talking gloves” for people with speech disability. The patented device costs less than Rs 5,000 and uses principles of Artificial Intelligence (AI) and Machine Learning (ML) to automatically generate speech that is language-independent, facilitating communication between people with speech disabilities and others. The device converts hand gestures into text or pre-recorded voices, enabling a differently-abled person to communicate a message independently and effectively.
 
“The language-independent speech generation device will bring people back into the mainstream in today’s global era without any language barrier. Users of the device need to learn it only once, and they will then be able to communicate verbally in any language they know,” said Sumit Kalra, Assistant Professor, Department of Computer Science and Engineering, IIT Jodhpur. In the device, a first set of sensors, worn on a combination of the thumb, fingers, and wrist of one hand, generates electrical signals as those parts move. A second set of sensors on the other hand generates signals in the same way. Additionally, the device can be customised to produce a voice similar to the user’s original voice, which makes speech generated through the device sound more natural, Kalra said.
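The two sensor sets described above can be pictured in a short sketch. This is a hypothetical model for illustration only: the sensor names, counts, and value ranges are assumptions, not the actual hardware interface of the IIT/AIIMS Jodhpur device.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class SensorSet:
    """One glove: readings from sensors on the thumb, fingers, and wrist.

    Values are illustrative normalised magnitudes (0.0 = relaxed, 1.0 = fully flexed).
    """
    thumb: float
    fingers: List[float]  # one reading per finger
    wrist: float

    def magnitudes(self) -> List[float]:
        """Flatten this glove's readings into a single magnitude vector."""
        return [self.thumb, *self.fingers, self.wrist]


def gesture_vector(first_hand: SensorSet, second_hand: SensorSet) -> List[float]:
    """Combine both sensor sets into one vector describing the gesture."""
    return first_hand.magnitudes() + second_hand.magnitudes()


# Example gesture: left hand partly flexed, right hand mostly relaxed.
left = SensorSet(thumb=0.2, fingers=[0.9, 0.8, 0.1, 0.1], wrist=0.5)
right = SensorSet(thumb=0.0, fingers=[0.0, 0.0, 0.0, 0.0], wrist=0.3)
vec = gesture_vector(left, right)
```

In this sketch each glove contributes six readings (thumb, four fingers, wrist), so a gesture is a 12-element magnitude vector that the signal processing unit could then compare against stored combinations.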
 
“These electrical signals are received at a signal processing unit, which compares their magnitudes with a plurality of pre-defined combinations of magnitudes stored in memory. Using AI and ML algorithms, these combinations of signals are translated into phonetics corresponding to at least one consonant or vowel. In an example implementation, the consonants and vowels can be drawn from Hindi phonetics. A phonetic is assigned to the received electrical signals based on the comparison,” Kalra added.
 
An audio transmitter then generates an audio signal corresponding to the assigned phonetic, based on trained data associated with vocal characteristics stored in a machine learning unit. Generating audio signals from phonetics, as combinations of vowels and consonants, produces speech and enables mute people to communicate audibly with others. Because the speech synthesis technique works at the level of phonetics, the speech generation is independent of any language.
 
“The team is further working to enhance features of the device such as durability, weight, responsiveness, and ease of use. The developed product will be commercialised through a startup incubated by IIT Jodhpur,” he said.
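The matching step described above, comparing an incoming magnitude vector against stored pre-defined combinations and assigning the closest phonetic, can be sketched as follows. The phonetic table, two-element vectors, and nearest-match distance metric are illustrative assumptions, not the device's actual algorithm or data.

```python
import math

# Hypothetical stored table: pre-defined magnitude combinations mapped to
# Hindi phonetics (tiny two-element vectors keep the example readable).
PHONETIC_TABLE = {
    (0.9, 0.1): "ka",  # a consonant phonetic
    (0.1, 0.9): "aa",  # a vowel phonetic
    (0.5, 0.5): "ma",
}


def assign_phonetic(signal):
    """Assign the phonetic whose stored combination is nearest to the signal."""
    best = min(PHONETIC_TABLE, key=lambda combo: math.dist(combo, signal))
    return PHONETIC_TABLE[best]


def synthesise(signals):
    """Concatenate per-gesture phonetics into a speech string."""
    return "".join(assign_phonetic(s) for s in signals)


# Two noisy gestures: near the "ka" combination, then near the "aa" one.
word = synthesise([(0.85, 0.15), (0.2, 0.95)])  # → "kaaa"
```

A real system would feed the assigned phonetics to the audio transmitter for voice synthesis; this sketch stops at the phonetic string, which is the part that makes the approach language-independent.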