LipSync, a browser experiment built by YouTube for Chrome on desktop, scores your lip-syncing performance. The video is then fed to Google's AI; no audio is recorded.
Google plans to use the video clips to teach its AI how human faces move when we speak. This could inform tools for people with ALS and other speech impairments: someday, AI might be able to infer what a person is saying by observing their facial movements and then speak aloud on their behalf.
Google already offers several accessibility features, from Android apps for the hard of hearing to accessible locations highlighted in Maps. This kind of visual speech recognition could become a useful addition to that lineup.