
Facebook apologizes for its moderation 'mistakes'

It's still struggling to manage sensitive material.

With over 2 billion users and counting, Facebook is increasingly difficult to police. The unenviable job rests on the shoulders of a 7,500-strong team of content moderators (alongside the site's algorithms), who sift through vast quantities of disturbing posts, ranging from violent terrorist material to images of child abuse. Unsurprisingly, they don't always get it right (in part because Facebook's guidelines are ambiguous). And so, yet another report of hateful material slipping through the site's cracks has emerged, this time from ProPublica.

The non-profit sent Facebook a sample of 49 items, most containing hate speech and a few with legitimate expression (drawn from its pool of 900 crowdsourced posts), and the social network admitted its reviewers made mistakes in 22 of the cases. In six cases, Facebook blamed users for not flagging the posts correctly, and in two more it said it didn't have enough information to respond. The company defended the remaining 19 decisions. The posts included sexist, racist, and anti-Muslim rhetoric.

"We're sorry for the mistakes we have made," said Facebook VP Justin Osofsky in a statement. "We must do better." The exec revealed that the social network will up its safety and security team to 20,000 people next year in an effort to better implement its community standards. He added that Facebook deletes around 66,000 posts reported as hate speech each week.

On top of its fight against misinformation, Facebook has been adding new tools to manage sensitive material. In April, it introduced a reporting mechanism for revenge porn, and earlier this month it launched features to help you block or ignore harassers.