Meta's latest transparency report details bullying on Facebook and Instagram

The company has been trying to counter revelations from the Facebook Papers.

Karissa Bell
November 9, 2021 1:28 PM
Photo illustration: the Meta logo displayed on a mobile phone screen in Ankara, Turkey, October 29, 2021. (Hakan Nural/Anadolu Agency via Getty Images)

Facebook has shared new statistics on the amount of bullying, hate speech and harassment on its platform. The new numbers, released with the company’s latest quarterly transparency reports, come as Meta faces increasing scrutiny over its ability to protect users and adequately enforce its policies around the world.

Its latest report marks the first time the company has shared "prevalence" metrics around bullying and harassment on its platform. "Prevalence" is a statistic Facebook uses to track violating content that slips through its detection systems. "It represents the amount of violating content that people actually view that actually shows up on someone's screen," the company's VP of Integrity Guy Rosen told reporters during a briefing.

According to the company, the prevalence of bullying content was between 0.14% and 0.15% on Facebook and between 0.05% and 0.06% on Instagram. “This means bullying and harassment content was seen between 14 and 15 times per every 10,000 views of content on Facebook and between 5 and 6 times per 10,000 views of content on Instagram,” the company explains in a statement. Instagram in particular has faced questions about its ability to deal with bullying and harassment. The company introduced several new anti-bullying measures earlier this year after several UK football players detailed their experience with racist abuse on the app.
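Converting a prevalence rate into the "views per 10,000" framing the company uses is simple arithmetic. The sketch below is illustrative only, not Meta's methodology; the rates are the figures quoted in the report:

```python
# Illustrative conversion of a prevalence rate into violating views
# per 10,000 content views. Not Meta's actual methodology; the rates
# below are the Facebook bullying figures quoted in the report.

def views_per_10k(prevalence: float) -> int:
    """Convert a prevalence rate (e.g. 0.0014 for 0.14%) into views per 10,000."""
    return round(prevalence * 10_000)

for rate in (0.0014, 0.0015):
    print(views_per_10k(rate))  # prints 14, then 15
```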

Importantly, the company notes that this “prevalence” metric only accounts for content Facebook and Instagram removes without a user report. That means the statistic captures only a subset of all bullying content, since bullying and harassment are not always easy for an automated system to identify.

That distinction has been underscored by revelations in the Facebook Papers, a trove of documents made public by former employee turned whistleblower Frances Haugen. According to documents she shared, Facebook’s own researchers estimate that the company is only able to address around three to five percent of hate speech on its platform, meaning the vast majority goes undetected and is allowed to pollute users’ News Feeds.

Facebook has repeatedly pushed back on these claims, and has pointed to the “prevalence” stats it shares in its transparency reports. But as researchers have pointed out, the company’s own accounting of “prevalence” can mask the true amount of violating content on the platform. That’s because Facebook’s automated systems are not always reliable, especially at detecting content in languages other than English. The revelations have fueled allegations that Facebook puts profits ahead of user safety.

“We have absolutely no incentive, whether it's commercial or otherwise, to do anything other than make sure people have a positive experience,” Rosen said Tuesday. “I think it's also just not true that our algorithms are just optimized to squeeze out engagement. We're constantly refining how we do ranking in order to tackle these problems.”

In its latest report, Facebook said the prevalence of hate speech had declined for the fourth straight quarter, falling from 0.05% last quarter to 0.03% this quarter. The company also reported the prevalence of hate speech on Instagram for the first time, saying hate speech was at 0.02%, or around 2 out of every 10,000 pieces of content viewed on the platform.

However, it’s worth noting that even the most optimistic take on these numbers (0.03% for Facebook and 0.02% for Instagram) can still mean millions of people encounter hate speech every day, given the platforms' vast user bases and the volume of content posted to them.
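To see why a small rate still adds up, here is a back-of-envelope sketch. The daily view total is an assumption chosen purely for illustration; Meta does not publish that figure:

```python
# Back-of-envelope: a 0.03% prevalence applied to a hypothetical
# (assumed, not reported) total of 100 billion content views per day.

hate_speech_prevalence = 0.0003          # 0.03% of content views
assumed_daily_views = 100_000_000_000    # assumption for illustration only

violating_views = hate_speech_prevalence * assumed_daily_views
print(f"{violating_views:,.0f} violating views per day")
```

Under that assumed volume, the exposure count runs into the tens of millions of views per day.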

Separately, Facebook also said its researchers are working on "a relatively new area of AI research called 'few-shot' or 'zero-shot' learning," which would enable them to train AI models much more rapidly. Instead of relying on massive datasets to manually train models for, say, identifying hate speech, it would enable models that can "learn to recognize something from just a small number of training examples, or even just a single example," the company wrote. Facebook didn't say how long it might take to put this research into action, but the work suggests the company is still pursuing AI advancements to address major content issues.
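Facebook didn't describe its models, but the core idea of few-shot classification can be sketched with a toy example: compare a new piece of text against a handful of labeled examples in an embedding space, rather than training on millions of labels. Everything below is a simplified illustration; the bag-of-words "embedding" stands in for the learned encoders a production system would actually use:

```python
# Toy sketch of few-shot text classification: score a new example
# against a handful of labeled "shots" and take the closest label.
# A real system would use a learned multilingual text encoder, not
# the bag-of-words stand-in here.

from collections import Counter
import math

def embed(text: str) -> Counter:
    # Stand-in for a learned text encoder: word-count vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def few_shot_classify(text: str, examples: list[tuple[str, str]]) -> str:
    # 'examples' is the handful of labeled (text, label) shots.
    scored = [(cosine(embed(text), embed(t)), label) for t, label in examples]
    return max(scored)[1]
```

With only two labeled "shots," such a classifier already routes a new message toward the closer example, which is the spirit (if not the substance) of the research Meta describes.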
