The non-profit sent Facebook a sample of 49 items, most containing hate speech and a few representing legitimate expression, drawn from its pool of 900 crowdsourced posts. The social network admitted its reviewers made mistakes in 22 of the cases. In six cases, Facebook said users hadn't flagged the posts correctly, and in two more it said it didn't have enough information to respond. The company defended 19 of its decisions. The posts included sexist, racist, and anti-Muslim rhetoric.
"We're sorry for the mistakes we have made," said Facebook VP Justin Osofsky in a statement. "We must do better." The exec revealed that the social network will grow its safety and security team to 20,000 people next year in an effort to better enforce its community standards. He added that Facebook deletes around 66,000 posts reported as hate speech each week.
On top of its fight against misinformation, Facebook has also been adding new tools to combat sensitive material. In April, it introduced a reporting mechanism for revenge porn, and earlier this month it launched features to help you block or ignore harassers.