Google made a tonne of announcements at Google I/O, ranging from Android Q advancements to the unveiling of new hardware. Two particularly interesting announcements that could play a huge role in enhancing the accessibility of mobile technology were Live Relay and Project Euphonia. Live Relay is aimed at people with hearing difficulties, letting them have a regular phone call by transcribing the audio from the other end live and converting their text-based response back into speech. Project Euphonia, on the other hand, is for people with speech difficulties resulting from ALS or other degenerative ailments.
Starting with Live Relay, it is a research project that aims to make it easier for people with hearing difficulties to have a regular conversation over a phone call. Live Relay first transcribes whatever the person on the other end has said using speech recognition algorithms. Once the transcript appears on the screen, users can send one of the responses suggested by Smart Reply and Smart Compose, or type a custom reply, which is then converted into audio and relayed to the person on the other end of the call.
The whole premise is to let users with hearing issues have a regular phone call without their difficulties getting in the way. It can also be helpful for regular folks who can't talk on the phone at a particular time or place, but still want to send a response. Live Relay's entire mechanism runs locally on the device, so it doesn't need a data connection, and it works even if the person on the other side is using a landline phone.
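In rough terms, the flow Google describes looks like the Python sketch below. The function names (recognize_speech, suggest_replies, synthesize_speech) are hypothetical stand-ins for Google's on-device models, which have not been published; this is an illustration of the pipeline, not the actual implementation.

```python
# Hypothetical sketch of the Live Relay pipeline: transcribe the caller's
# speech, let the user pick or type a text reply, speak it back as audio.

def recognize_speech(audio_chunk: bytes) -> str:
    """Hypothetical on-device speech recognizer: audio in, transcript out."""
    return "Hi, are we still on for lunch tomorrow?"

def suggest_replies(transcript: str) -> list[str]:
    """Hypothetical Smart Reply model: offers short suggested responses."""
    return ["Yes, see you then!", "Sorry, I need to reschedule.", "Let me check."]

def synthesize_speech(text: str) -> bytes:
    """Hypothetical text-to-speech engine: text in, synthesized audio out."""
    return text.encode("utf-8")  # placeholder for real audio bytes

def handle_incoming_audio(audio_chunk: bytes) -> bytes:
    # 1. Transcribe the caller's speech so the user can read it on screen.
    transcript = recognize_speech(audio_chunk)
    print(f"Caller: {transcript}")

    # 2. Offer Smart Reply suggestions; the user can also type a custom reply.
    options = suggest_replies(transcript)
    reply = options[0]  # in the real feature, the user picks or types

    # 3. Convert the chosen text back into audio for the caller. Because the
    #    caller only ever hears synthesized speech, a landline works fine.
    return synthesize_speech(reply)

if __name__ == "__main__":
    handle_incoming_audio(b"\x00\x01")  # placeholder audio input
```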
Google is working to make Live Relay even more useful by adding real-time translation to the mix. However, the promising technology is still in the research phase, and there is no word from Google on when it will be made widely available.
Project Euphonia is the second accessibility-focused initiative announced by Google, developed for people who have difficulty speaking, possibly due to neurodegenerative conditions like ALS or Parkinson's disease. Since voice-assisted services like Google Assistant are trained on models built from typical voice samples, they may not be particularly useful for people with speech impairments.
Google, in collaboration with the ALS Therapy Development Institute (ALS TDI) and the ALS Residence Initiative (ALSRI), is working to overcome that issue by training its AI algorithms on speech samples from people with speech impairments, so that their voices can be recognised and processed more reliably. Aside from improving speech recognition, Google is also training its AI to detect sounds and gestures and convert them into executable commands for devices like the Google Home smart speaker.
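Conceptually, this is a personalization problem: take a recognizer trained on typical speech and fine-tune it on a small set of one speaker's recordings. The toy PyTorch sketch below illustrates that idea only; the model, features, and labels are placeholders, since Google has not published Euphonia's architecture or training code.

```python
# Toy sketch of personalizing a speech model on one speaker's samples.
# Everything here is a placeholder, not Project Euphonia's real code.
import torch
import torch.nn as nn

# Toy "acoustic model": maps a fixed-size audio feature vector to phoneme logits.
model = nn.Sequential(nn.Linear(40, 128), nn.ReLU(), nn.Linear(128, 50))

# Hypothetical personal dataset: feature vectors extracted from the user's
# own recordings, paired with the phonemes they intended to produce.
features = torch.randn(32, 40)         # stand-in for real audio features
labels = torch.randint(0, 50, (32,))   # stand-in for real phoneme labels

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# A few gradient steps nudge the generic model toward this speaker's voice:
# the base model stays the same, only the weights adapt to atypical speech.
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(features), labels)
    loss.backward()
    optimizer.step()
```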
To broaden the research, Google is collecting voice samples from people with a wider range of speech impairments, and is accepting recordings from volunteers who want to contribute.
At the company's annual developer conference, Google also talked about the Live Transcribe and Sound Amplifier apps, both of which were announced earlier this year and are aimed at people with hearing difficulties. Live Transcribe serves as a real-time transcription tool for people with severe hearing loss, while Sound Amplifier enhances the volume and clarity of sound for people with partial hearing loss.