Twitter bans deepfakes that are 'likely to cause harm'

It may label manipulated content and warn users before they retweet it.

Twitter just released its new rules for handling synthetic and manipulated media. The company says you can no longer "deceptively share" deepfakes that are "likely to cause harm," and it may label Tweets containing deepfakes to help people understand what's real and what's been altered.

When determining whether media has been deceptively altered, Twitter will consider factors like whether a real person has been fabricated. It may flag content if visual or auditory information (like dubbing) has been added or removed. It will also judge the context and whether the deepfake is likely to impact public safety or cause serious harm.

Beginning March 5th, Twitter may label tweets with "deceptively altered or fabricated content." It may also show a warning to people before they retweet or like the manipulated media, reduce the tweet's visibility, prevent it from being recommended, or provide additional explanations through a landing page.


These changes are part of a broader effort to combat deepfakes. Twitter promised these rules late last year and drafted the guidelines based on user feedback. The platform has already banned porn deepfakes, and as the 2020 election nears, Twitter likely wants to head off political deepfake scandals and misinformation campaigns.
