
Meta's new multimodal translator uses a single model to speak 100 languages

We're getting tantalizingly close to Babelfish territory.


Though it's not quite ready to usher in the Doolittle future we've all been waiting for, modern AI translation methods are proving more than capable of accurately translating among humanity's roughly 6,500 spoken and written languages. The problem is that each of these models tends to do only one or two tasks really well — converting text to speech, speech to text, text to text or speech to speech — so you end up having to stack a bunch of models on top of one another to get the generalized performance seen in the likes of Google Translate or Facebook's myriad language services.

That's a computationally intensive process, so Meta developed a single model that can do it all. SeamlessM4T is "a foundational multilingual and multitask model that seamlessly translates and transcribes across speech and text," Meta's blog from Tuesday reads. For speech-to-text and text-to-text tasks, it can translate between nearly 100 languages; for speech-to-speech and text-to-speech tasks, it accepts those same languages as input and produces output in any of 36 tongues, including English.
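Summed up in code, the task coverage Meta describes breaks down roughly like this — a schematic sketch of the figures above, not anything from Meta's codebase:

```python
# Schematic summary of SeamlessM4T's task coverage, per Meta's announcement.
# Language counts are approximate ("nearly 100" input languages; 36 speech-output languages).
SEAMLESS_M4T_TASKS = {
    "speech-to-text":   {"input_languages": 100, "output_languages": 100},
    "text-to-text":     {"input_languages": 100, "output_languages": 100},
    "speech-to-speech": {"input_languages": 100, "output_languages": 36},
    "text-to-speech":   {"input_languages": 100, "output_languages": 36},
}

for task, langs in SEAMLESS_M4T_TASKS.items():
    print(f"{task}: {langs['input_languages']} in -> {langs['output_languages']} out")
```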

In their blog post, Meta's research team notes that SeamlessM4T "significantly improve[s] performance for the low and mid-resource languages we support," while maintaining "strong performance on high-resource languages, such as English, Spanish, and German." Meta built SeamlessM4T on its existing PyTorch-based multitask UnitY model architecture, which already natively performs the various modal translations as well as automatic speech recognition. It uses the w2v-BERT 2.0 speech encoder to break audio inputs down into their component tokens for analysis, and a HiFi-GAN unit vocoder to generate spoken responses.
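In broad strokes, that UnitY design is a two-pass pipeline: the speech encoder produces a shared representation, a text decoder emits the translated text, and a second decoder turns that text into discrete acoustic units that the HiFi-GAN vocoder renders as a waveform. Here's a minimal sketch of that flow — every module below is a hypothetical stand-in, not Meta's actual implementation:

```python
import torch
import torch.nn as nn

class UnitYStylePipeline(nn.Module):
    """Illustrative two-pass speech-to-speech pipeline in the spirit of UnitY.

    All submodules are hypothetical stand-ins:
    - speech_encoder plays the role of w2v-BERT 2.0 (audio -> hidden states)
    - text_decoder produces translated text tokens from those states
    - unit_decoder maps the translated text to discrete acoustic units
    - vocoder plays the role of the HiFi-GAN unit vocoder (units -> waveform)
    """

    def __init__(self, speech_encoder, text_decoder, unit_decoder, vocoder):
        super().__init__()
        self.speech_encoder = speech_encoder
        self.text_decoder = text_decoder
        self.unit_decoder = unit_decoder
        self.vocoder = vocoder

    @torch.no_grad()
    def speech_to_speech(self, waveform: torch.Tensor, tgt_lang: str) -> torch.Tensor:
        hidden = self.speech_encoder(waveform)              # pass 1: encode source audio
        text_tokens = self.text_decoder(hidden, tgt_lang)   # pass 1: decode translated text
        units = self.unit_decoder(text_tokens, hidden)      # pass 2: text -> acoustic units
        return self.vocoder(units)                          # units -> target-language audio
```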

Meta has also curated a massive open-source speech-to-speech and speech-to-text parallel corpus, dubbed SeamlessAlign. The company mined "tens of billions of sentences" and "four million hours" of speech from publicly available repositories to "automatically align more than 443,000 hours of speech with texts, and create about 29,000 hours of speech-to-speech alignments," per the blog. When tested for robustness, SeamlessM4T reportedly outperformed the previous state-of-the-art model against background noise and speaker style variation, improving by 37 percent and 48 percent, respectively.
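The post doesn't detail how that alignment is done, but corpora like this are commonly mined by embedding speech segments and text sentences in a shared space and keeping pairs whose similarity clears a threshold. A rough sketch of that general idea, with the encoders left abstract — illustrative only, not Meta's actual mining pipeline:

```python
import torch
import torch.nn.functional as F

def mine_speech_text_pairs(speech_embs: torch.Tensor,
                           text_embs: torch.Tensor,
                           threshold: float = 0.8):
    """Illustrative embedding-based mining, not Meta's actual pipeline.

    speech_embs: (n_speech, d) embeddings of speech segments
    text_embs:   (n_text, d) embeddings of candidate sentences
    Returns indices of (speech, text) pairs whose cosine similarity clears the threshold.
    """
    speech_embs = F.normalize(speech_embs, dim=-1)
    text_embs = F.normalize(text_embs, dim=-1)
    sims = speech_embs @ text_embs.T          # cosine similarity matrix
    best_sim, best_text = sims.max(dim=1)     # best text match per speech segment
    keep = best_sim >= threshold              # keep only confident alignments
    return torch.nonzero(keep).squeeze(1), best_text[keep]
```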

As with most of its previous publicly released AI efforts — whether that's Llama 2, Massively Multilingual Speech (MMS), Universal Speech Translator (UST), or the ambitious No Language Left Behind (NLLB) project — SeamlessM4T is being open-sourced. "We believe SeamlessM4T is an important breakthrough in the AI community’s quest toward creating universal multitask systems," the team wrote. "Keeping with our approach to open science, we are excited to share our model publicly to allow researchers and developers to build on this technology." If you're interested in working with SeamlessM4T for yourself, head over to GitHub to download the model, training data and documentation.
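For a quick taste of what running it looks like, here's a hedged sketch using the Hugging Face transformers integration and the facebook/hf-seamless-m4t-medium checkpoint — neither of which is mentioned in Meta's post, which points to the GitHub release instead:

```python
from transformers import AutoProcessor, SeamlessM4TModel

processor = AutoProcessor.from_pretrained("facebook/hf-seamless-m4t-medium")
model = SeamlessM4TModel.from_pretrained("facebook/hf-seamless-m4t-medium")

# English text in, Spanish speech out (text-to-speech translation).
inputs = processor(text="Universal translators are getting closer.", src_lang="eng", return_tensors="pt")
waveform = model.generate(**inputs, tgt_lang="spa")[0].cpu().numpy().squeeze()

# The same call with generate_speech=False yields translated text tokens instead.
tokens = model.generate(**inputs, tgt_lang="spa", generate_speech=False)
print(processor.decode(tokens[0].tolist()[0], skip_special_tokens=True))
```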