Facebook labeled 180 million posts for election misinformation

It removed 265,000 posts for voter interference.

Facebook just offered its first look at the scale of its fight against election misinformation. In the lead-up to the 2020 presidential election, Facebook slapped warning labels on more than 180 million posts that shared misinformation. And it removed 265,000 pieces of content for breaking the company’s rules against voter interference.

Facebook’s VP of Integrity Guy Rosen shared the stats during a call with reporters Thursday. He noted that around 95 percent of Facebook users don’t click through the warning labels to view posts that are labeled for misinformation. Importantly, these stats cover the period between March and Election Day, so they don’t offer any insight into the company’s post-election efforts to slow down viral misinformation.

Rosen also shared an update on the company’s fight against COVID-19 misinformation. He said that between March and October, Facebook removed more than 12 million pieces of content for sharing dangerous misinformation about the coronavirus. Under Facebook’s rules, the company removes misinformation it says can lead to imminent harm, such as fake cures. Misinformation it considers less dangerous is sent to the company’s fact checkers for debunking. According to Rosen, Facebook has now labeled 167 million posts for coronavirus misinformation.

The numbers were provided alongside the social network’s latest transparency report, which details Facebook’s content takedowns over a variety of policies. The report revealed new metrics around the company’s work to combat hate speech. Facebook said it removed just over 22 million pieces of content for hate speech during the third quarter of 2020. That number is roughly in line with the 22.5 million takedowns from the previous quarter. But for the first time, the company is also giving more context to these numbers.

“For the first time, we’re including the prevalence of hate speech on Facebook globally,” the company wrote in a blog post. “In Q3 2020, hate speech prevalence was 0.10% - 0.11% or 10 to 11 views of hate speech for every 10,000 views of content.”

Though the company didn’t mention this summer’s ad boycott, which was organized by civil rights leaders in response to the company’s hate speech policies, the new “prevalence” metric seemed designed to push back on the narrative that hate speech is rampant on the platform. (Importantly, boycott organizers took issue not just with the amount of hate speech on Facebook, but the company’s policies for handling it.)

Facebook also pointed to advancements in its AI technology, which the company has credited with increasing its ability to proactively detect hate speech before it’s reported. Not everyone is convinced, though. Facebook’s latest numbers come one day after a group of moderators penned an open letter to company executives alleging that the company’s AI-based tools were “years away” from being truly effective.