
Facebook deploys AI in its fight against hate speech and misinformation

The company is also launching a $100,000 Hateful Memes Challenge.


Even in the year 2020, it’s not very hard to be led astray on Facebook. Click a few misleading links and you can find yourself at the bottom of an ethnonationalist rabbit hole facing a flurry of hate speech and medical misinformation. But with the help of AI and machine learning systems, the social media platform is accelerating its efforts to keep this content from spreading.

It’s bad enough that we’re dealing with the COVID-19 pandemic without also being bombarded on Facebook by ads for sham cures and conspiracy theories passed off as gospel truth. The company is already partnering with 60 fact-checking organizations to fight this disinformation, and since the outbreak took hold in March it has temporarily banned the sale of PPE, hand sanitizer, and cleaning supplies on the platform. The problem is that people can easily get around such a ban by slightly altering the text or image in an ad and resubmitting it. The original and the altered copy -- a screenshot of it, for example -- may look nearly identical to a person, yet trip up conventional computer vision systems, which compare individual pixels rather than the image as a whole. They miss the forest for the trees.
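To see why pixel-exact matching is so brittle, note that a byte-level hash changes completely when even one pixel does. A minimal illustration in Python (not Facebook’s actual pipeline):

```python
# Minimal illustration: a byte-exact hash changes completely when a
# single pixel does, so trivially edited re-uploads evade naive matching.
import hashlib

from PIL import Image

img = Image.new("RGB", (64, 64), "white")
original_hash = hashlib.md5(img.tobytes()).hexdigest()

img.putpixel((0, 0), (254, 255, 255))  # one imperceptibly darker pixel
altered_hash = hashlib.md5(img.tobytes()).hexdigest()

print(original_hash == altered_hash)  # False: the "duplicate" slips through
```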

But that’s where SimSearchNet comes in. This convolutional neural network-based model is purpose-built to identify nearly identical images, which in turn helps automate enforcement of the calls human moderators make. Once a human fact-checker flags an image as containing false claims about COVID-19, that verdict is fed into SimSearchNet, which seeks out near-duplicates so moderators can affix warning labels to those images as well. It essentially scales up a moderator’s reach, autonomously applying their decision to the thousands (potentially millions) of digital doppelgangers that a false image spawns.
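Facebook hasn’t published SimSearchNet’s internals here, but the general near-duplicate technique is straightforward: map each image to an embedding vector and treat high cosine similarity as a match. Below is a minimal sketch using an off-the-shelf ResNet as a stand-in feature extractor; the model choice and the 0.97 threshold are illustrative assumptions, not Facebook’s values.

```python
# Sketch of near-duplicate image detection via CNN embeddings.
# NOT SimSearchNet itself -- just the general embed-and-compare technique.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Assumption: a pretrained ResNet-50 stands in for the real feature extractor.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()  # drop the classifier head, keep embeddings
backbone.eval()

preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(path: str) -> torch.Tensor:
    """Map an image file to a unit-length embedding vector."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        vec = backbone(img).squeeze(0)
    return vec / vec.norm()

def near_duplicate(path_a: str, path_b: str, threshold: float = 0.97) -> bool:
    """Flag two images as near-duplicates if their embeddings are close.
    The 0.97 cosine-similarity threshold is an illustrative assumption."""
    return torch.dot(embed(path_a), embed(path_b)).item() >= threshold
```

At Facebook’s scale the pairwise comparison would be replaced by indexing billions of embeddings for approximate nearest-neighbor search (the company’s own FAISS library exists for exactly that), but the core idea is the same.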

“What we want to be able to do is detect those things as being identical, because they are — to a person — the same thing,” Mike Schroepfer, Facebook CTO, explained on a conference call Tuesday. “But we have to do this with very high accuracy, because we don't want to take something that looks very similar, but is actually qualitatively different, and either put in a misinformation overlay or block it as appropriate. And so our previous systems were very accurate, but they were very fragile and brittle to even very small changes if you change a small number of pixels.”

In a blog post, the company claims to have labeled about 50 million posts related to COVID-19 in April and removed “more than 2.5 million pieces of content for the sale of masks, hand sanitizers, surface disinfecting wipes and COVID-19 test kits.”

Of course, Facebook has had troll troubles for far longer than COVID-19 has had us sheltering in place, and the company has long sought to rein in the hate speech spread on its site. In a separate blog post published Tuesday, the company said that, per the Community Standards Enforcement Report released the same day, “AI now proactively detects 88.8 percent of the hate speech content we remove, up from 80.2 percent the previous quarter. In the first quarter of 2020, we took action on 9.6 million pieces of content for violating our hate speech policies -- an increase of 3.9 million.”

Detecting hate speech is no easy feat. There can be layers upon layers of nuance involved -- it’s not just what is being said, but how it is being said, who it is being said to, and what sort of accompanying content (whether images, audio or video) is presented alongside it. Even more difficult is correctly identifying responses to hate speech, especially when those statements use much of the same language as the offending post. Even human moderators are routinely tripped up. In response, Facebook has spent the past few years improving its natural language processing capabilities with models like XLM-R, a cross-lingual model trained on text in roughly 100 languages that lets a classifier built for one language carry over to the others, and RoBERTa, the pretraining method behind it, which trains BERT-style models for longer and on far more data.
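The payoff of a cross-lingual model is that text with similar meaning lands near the same point in one shared embedding space regardless of language, so a hate speech classifier trained on English examples can transfer to others. Here’s a minimal sketch using the publicly released xlm-roberta-base checkpoint via Hugging Face’s transformers library; the mean-pooling step is a common simplification, not Facebook’s production setup.

```python
# Embed sentences in different languages with XLM-R and compare them.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModel.from_pretrained("xlm-roberta-base")
model.eval()

def sentence_vector(text: str) -> torch.Tensor:
    """Mean-pool XLM-R's token embeddings into one unit-length vector."""
    batch = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state  # (1, tokens, 768)
    vec = hidden.mean(dim=1).squeeze(0)
    return vec / vec.norm()

# The same sentence in English and German should land close together.
en = sentence_vector("That post is full of hateful abuse.")
de = sentence_vector("Dieser Beitrag ist voller hasserfüllter Beleidigungen.")
print(torch.dot(en, de).item())  # cosine similarity in the shared space
```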


The company is even tackling hateful memes. As mentioned above, combining two types of content -- say, text and an image -- can severely hamper an AI’s attempts to identify hate speech. So Facebook AI built an entire data set of multimodal examples -- more than 10,000 professionally created memes using licensed Getty images -- with which to train tomorrow’s AI moderators.

“The memes were selected in such a way that strictly unimodal classifiers would struggle to classify them correctly,” the Facebook AI team wrote in a Tuesday blog post. “We also designed the data set specifically to overcome common challenges in AI research, such as the lack of examples to help machines learn to avoid false positives.”
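The underlying modeling challenge is fusion: a meme’s image and text may each be benign on their own, so the classifier has to reason over both together. Here’s a bare-bones sketch of one common approach, fusing precomputed image and text features; the dimensions and architecture are illustrative assumptions, not the baselines Facebook released with the data set.

```python
import torch
import torch.nn as nn

# Sketch of a multimodal classifier: image and text are embedded
# separately, then a small head reasons over both together.
class MemeClassifier(nn.Module):
    def __init__(self, img_dim=2048, txt_dim=768, hidden=512):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # hateful vs. not hateful
        )

    def forward(self, img_feat, txt_feat):
        # Concatenate the two modalities so the head can catch memes whose
        # text and image are each benign but hateful in combination.
        return self.head(torch.cat([img_feat, txt_feat], dim=-1))

model = MemeClassifier()
logits = model(torch.randn(4, 2048), torch.randn(4, 768))  # dummy batch of 4
```

This is the property the quote above describes: a text-only or image-only model scores poorly on these examples by design, so only genuinely multimodal reasoning does well.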

And to help spur the development of these sorts of machine learning content moderation systems, Facebook is partnering with DrivenData to launch the Hateful Memes Challenge. If you can code an AI that automatically identifies and labels this kind of multimodal hate speech, you could earn yourself a cool $100,000.