
Google trains its AI to accommodate speech impairments

This could help people with conditions like ALS communicate via voice assistants.

For most users, voice assistants are helpful tools. But for the millions of people with speech impairments caused by neurological conditions, voice assistants can be yet another frustrating challenge. Google wants to change that. At its I/O developer conference today, Google revealed that it's training AI to better understand diverse speech patterns, such as impaired speech caused by brain injury or conditions like ALS.

Through Project Euphonia, Google partnered with the ALS Therapy Development Institute (ALS TDI) and the ALS Residence Initiative (ALSRI). The idea was that if friends and family of people with ALS can learn to understand their loved ones, then Google could train computers to do the same. It simply needed to present its AI with enough examples of impaired speech patterns.

So, Google set out to record thousands of voice samples. One volunteer, Dimitri Kanevsky, a speech researcher at Google who learned English after becoming deaf as a child in Russia, recorded 15,000 phrases. Those were turned into spectrograms -- visual representations of sound -- and used to train the AI to understand Kanevsky.
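To make that pipeline concrete, here's a minimal sketch of how a recorded phrase could be converted into the kind of spectrogram a speech model trains on. Google hasn't published Project Euphonia's preprocessing, so the library (librosa) and every parameter below are illustrative assumptions, not the team's actual code.

```python
# Illustrative only: Euphonia's real preprocessing is unpublished; librosa
# and these parameter choices are assumptions made for this sketch.
import numpy as np
import librosa

def phrase_to_log_mel(path: str, sr: int = 16000, n_mels: int = 80) -> np.ndarray:
    """Convert one recorded phrase into a log-mel spectrogram:
    a 'visual representation of sound' suitable for model training."""
    audio, _ = librosa.load(path, sr=sr)   # load and resample to a fixed rate
    mel = librosa.feature.melspectrogram(
        y=audio, sr=sr,
        n_fft=400,        # 25 ms analysis window at 16 kHz
        hop_length=160,   # 10 ms hop between frames
        n_mels=n_mels,
    )
    return np.log(mel + 1e-6)              # log compression stabilizes training

# Usage: features = phrase_to_log_mel("phrase_0001.wav")
# features.shape is (n_mels, n_frames): one image-like example per phrase.
```

Fine-tuning a recognizer on thousands of such spectrograms from a single speaker, like Kanevsky's 15,000 phrases, is how a generic model could adapt to one person's speech patterns.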

This is still a work in progress; for now, Google is focusing on people who speak English and have impairments typically associated with ALS. It's calling for volunteers, who can fill out a short form and record a set of phrases. Google also wants its AI to translate sounds and gestures into actions, such as speaking commands to Google Home or sending text messages (a rough sketch of that idea follows below). Eventually, it hopes to develop AI that can understand anyone, no matter how they communicate.
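The sounds-and-gestures piece amounts to event classification plus a command mapping: detect a non-speech vocalization, then fire whatever action the user has assigned to it. Here is a hypothetical sketch of that routing step; the labels, the classifier interface, and the print-statement stand-ins for real smart-home or messaging calls are all assumptions, not Google's implementation.

```python
# Hypothetical sounds-to-actions router: the labels and actions are invented
# for illustration, and the print calls stand in for real device commands.
from typing import Callable, Dict

ACTIONS: Dict[str, Callable[[], None]] = {
    "hum":   lambda: print("speaking command to smart speaker"),
    "click": lambda: print("sending preset text message"),
}

def handle_sound(classify: Callable[[bytes], str], clip: bytes) -> None:
    """Route a classified sound event to the action the user mapped to it."""
    label = classify(clip)        # e.g. a small on-device sound-event model
    action = ACTIONS.get(label)
    if action is not None:        # ignore sounds with no assigned action
        action()

# Usage: handle_sound(my_model.predict, recorded_clip)
```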