Fake porn is the new fake news, and the internet isn’t ready

AI-generated porn is just the beginning.

Ever since Facebook finally admitted to having a fake news problem, it's been trying to fix it. It has hired thousands of people to help block fake ads, pledged to work with third-party fact-checking organizations and is building algorithms to detect fake news. But even as it fights back against fraudulent ads and made-up facts, another potential fake news threat looms on the horizon: artificially generated fake video.

Motherboard recently uncovered a disturbing new trend on Reddit, in which users create AI-generated pornographic clips by swapping other people's faces onto porn stars. The outlet first reported on the phenomenon a month ago when Reddit user "deepfakes" posted a video of Gal Gadot's face swapped onto a porn star's body (he's since created more fake porn with other celebrities). According to Motherboard, the video was created with machine-learning algorithms, easily accessible open-source libraries and images from Google, stock photos and YouTube videos.
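
Based on descriptions of similar face-swap systems, a common design is a shared-encoder autoencoder with one decoder per identity: both faces are compressed through the same encoder, and the swap happens by decoding one person's latent code with the other person's decoder. Below is a minimal PyTorch sketch of that idea; the layer sizes, learning rate and random tensors standing in for aligned face crops are all illustrative assumptions, not the actual deepfakes code.

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Shared encoder: maps a 64x64 face crop to a common latent code."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    """Per-identity decoder: reconstructs one specific face from the latent code."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity

params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=5e-5)
loss_fn = nn.L1Loss()

# Stand-ins for batches of aligned face crops of person A and person B.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)

for step in range(100):
    opt.zero_grad()
    # Each decoder learns to reconstruct its own identity from the shared code.
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    loss.backward()
    opt.step()

# The "swap": encode person A's face, decode it with person B's decoder.
swapped = decoder_b(encoder(faces_a))
```

Applied frame by frame, with face detection, alignment and blending around it, something like this is presumably what such an app automates behind its interface.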

But while that was just one user faking pornographic videos, there's now an app that helps tens of thousands of others do the same, not just with celebrities but with everyday people. Motherboard reports that the app can be used by people without a technical background or any programming experience. The app's creator, who goes by the name "deepfakeapp," told Motherboard he eventually wants to streamline the UI so that users can "select a video on their computer, download a neural network correlated to a certain face from a publicly available library, and swap the video with a different face with the press of one button." Though the resulting fake videos aren't perfect, some look eerily realistic.

Jessica Alba's face swapped onto a porn actress's body using the "deepfakes" app

Needless to say, this has frightening consequences. Not only does it open the door to a horrifying new kind of revenge porn, in which a vengeful ex could slap your face onto an X-rated video, but it also opens a Pandora's box of fears that nothing on the internet can ever be trusted. After all, false news claims are already widely read and shared on social media platforms like Facebook and Twitter. It doesn't take much imagination to picture a world in which foreign nation-states use AI to create and disseminate videos of politicians or public figures saying things they've never said. It would be fake news taken to a dystopian extreme.

At a recent AI roundtable in San Francisco, a panel of experts from various Bay Area tech companies was asked how the industry could safeguard against such malicious practices. Li Fan, Pinterest's head of engineering, said that in the specific case of face-swapping celebrities into adult movies, blocking fraudulent videos wouldn't be too difficult: you could run a face-matching algorithm against well-defined databases of celebrity faces and filter out the offending clips. But that, unfortunately, probably wouldn't be the end of it.
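
As a rough sketch of the filter Fan describes: compute a face embedding for each video frame and flag frames whose embedding sits close to any entry in a celebrity database. Everything below (the embedding dimension, the thresholds and the random vectors standing in for real embeddings and a real database) is a hypothetical stand-in, not anything Pinterest has detailed.

```python
import numpy as np

# Hypothetical database: celebrity name -> precomputed face embedding,
# normalized to unit length so cosine similarity reduces to a dot product.
CELEB_DB = {
    "celebrity_1": np.random.randn(128),
    "celebrity_2": np.random.randn(128),
}
for name, vec in CELEB_DB.items():
    CELEB_DB[name] = vec / np.linalg.norm(vec)

def matches_known_celebrity(frame_embedding, threshold=0.8):
    """Flag a frame whose face embedding is close to any database entry."""
    e = frame_embedding / np.linalg.norm(frame_embedding)
    return any(float(e @ v) >= threshold for v in CELEB_DB.values())

def flag_video(frame_embeddings, min_hits=10):
    """Flag a clip once enough individual frames match a known face."""
    return sum(matches_known_celebrity(e) for e in frame_embeddings) >= min_hits

# Demo with random stand-ins for per-frame embeddings from a real model.
frames = [np.random.randn(128) for _ in range(30)]
print(flag_video(frames))
```

The obvious limitation, and Fan's caveat, is that a filter like this only catches faces already enrolled in the database.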

"Where it gets really hard is when you have a mouse and cat game," she said. "There will always be hackers and fraudsters who will do more. It's really a game where we have to keep catching up." Andrew Bolwell, the head of HP Tech Ventures, suggested that the best defense for an AI hacker is an AI defender. "[Still] as the defense gets better, the attack gets better ... it's really hard to imagine an answer to that, that would cover all of the use cases."

Perhaps the most intriguing answer came from Joaquin Quinonero Candela, Facebook's director of applied machine learning. "I am confident that we can build algorithms to detect counterfeit content," he said, echoing the rest of the panel and endorsing the idea that AI is the best defense. But he also acknowledged that the rising ease and efficiency of imitation algorithms presents a pretty big, ongoing problem, one that, right now, doesn't seem to be going away.

Just a few months ago, a research team at the University of Washington created an artificial video of President Obama mouthing the words to a recording of himself. The team used 14 hours of his weekly addresses to train a neural network to sync his lips to the audio. The system even adjusted for head and jaw movement to make the whole thing look more realistic. While this sort of video magic would previously have required hours upon hours of painstaking CGI work, the team simply let the neural network train for a few hours and, voilà, a fake video was born.

And that's not all. A few years ago, Stanford University researchers developed software called Face2Face, which can capture someone's facial expressions on a webcam and then transfer them to the person in a video. A team at the University of Alabama at Birmingham is also working on synthesizing speech from audio culled from YouTube videos and radio shows.

Alexander Reben, the Bay Area artist and engineer behind Deeply Artificial Trees, a Bob Ross-inspired art piece powered by machine learning, said the technology behind faking videos will only get more advanced over time. "Not only is the technology here to fool the average person," he said via email, "home-computing hardware is now cheap enough for this sort of thing to be done by anyone.

"One could train another system with fake and real images to try to differentiate them," he said about using AI as a defense. "However, this just might lead to an arms race of the faking systems trying to game the detecting systems."

There are positive use cases for such technology, Reben said, such as using it in cinema or bringing a loved one "back to life" as a digital character. "But probably the bigger implication will be that we can trust our eyes and ears even less."

On the other end of the spectrum are machine-learning experts like Candela, who are busy using artificial intelligence to help weed out fake news in the first place. But the results are far from perfect. Dan Zigmond, Facebook's director of analytics for News Feed, said in a YouTube video last November that despite all the company's best efforts to identify and downrank fake news, it still takes three whole days for fact-checkers to verify a story. Three days is more than enough time for a story to go viral and for misinformation to spread (Zigmond acknowledged this in the video and said Facebook is working on improving the turnaround).

A couple of weeks ago, a clickbait story from YourNewsWire claimed that the US Centers for Disease Control and Prevention had said the flu shot was the culprit behind the recent spate of flu-related deaths. The story was a complete fabrication: the CDC has in fact been strongly encouraging people to get the flu shot because of how severe this year's flu season is. Snopes debunked the story almost immediately, but it was too late. Before long, the story had generated more than 176,000 engagements, and people kept sharing it even though Facebook had already stripped the verified badge from one of the publication's two pages.

"Facebook today cannot exist without AI," said Candela in an interview with Wired magazine last February. "Every time you use Facebook or Instagram or Messenger, you may not realize it, but your experiences are being powered by AI." At the same time, however, he also admits that AI is still young. "The challenge is that AI is really in its infancy still," he said in the same interview. "We're only getting started."

It's true that AI is helpful in a lot of ways. Computer vision helps visually impaired people navigate the web with descriptive captions and can sort through your favorite vacation photos, while automated language translation helps people from different countries communicate with one another. And, yes, Facebook uses its ranking algorithms to surface the connections and stories it thinks are most interesting to you, which is useful for keeping up to date with your friends and family.

But AI can be harmful in the wrong hands, and even though AI can, in turn, be used to detect bad actors, it clearly isn't enough on its own. Seeing as the technology hasn't even succeeded in blocking misleading news stories, it seems unlikely to stop fake video from propagating before it's too late. Companies like Facebook need to remember that one of their core responsibilities is to safeguard their communities. And we can't afford to be collateral damage in a cat-and-mouse game of AI versus AI.