A new AI voice tool is already being abused to make deepfake celebrity audio clips

This is why we can't have nice things.

A few days ago, speech AI startup ElevenLabs launched a beta version of its platform that lets users create entirely new synthetic voices for text-to-speech audio or clone somebody's voice. Well, it only took the internet a few days to start using the latter for vile purposes. The company revealed on Twitter that it's seeing an "increasing number of voice cloning misuse cases" and that it's looking to address the problem by "implementing additional safeguards."

While ElevenLabs didn't elaborate on what it meant by "misuse cases," Motherboard found 4chan posts with clips featuring generated voices that sound like celebrities reading or saying something questionable. One clip, for instance, reportedly featured a voice that sounded like Emma Watson reading a part of Mein Kampf. Users also posted voice clips featuring homophobic, transphobic, violent and racist sentiments. It's not entirely clear whether all the clips used ElevenLabs' technology, but one 4chan post with a wide collection of the voice files included a link to the startup's platform.

Perhaps the emergence of these "deepfake" audio clips shouldn't come as a surprise: we saw a similar phenomenon play out a few years ago. Advances in AI and machine learning led to a rise in deepfake videos, specifically deepfake pornography, wherein existing pornographic materials are altered to use the faces of celebrities. And, yes, people used Emma Watson's face for some of those videos.

ElevenLabs is now gathering feedback on how to prevent users from abusing its technology. Its current ideas include adding more layers of account verification before enabling voice cloning, such as requiring users to enter payment info or an ID. It's also considering having users verify copyright ownership of the voice they want to clone, for instance by submitting a sample with prompted text. Finally, the company is thinking of dropping its Voice Lab tool altogether and having users submit voice cloning requests that it manually verifies.