Facebook removed over 1.5 billion fake accounts in the last six months

The latest transparency report shows government requests for user data are increasing, as well.

It's been less than a day since the New York Times published a brutal report about the state of affairs at Facebook, including a deep look at the company's failure to properly identify and deal with fake news and Russian interference. The company already issued a lengthy response to the claims, and today Facebook is publishing its biannual transparency report, covering the first half of 2018. Facebook says the report covers "information about government requests for user data we've received; reports on where access to Facebook products and services was disrupted; the number of content restrictions based on local law; and reports of counterfeit, copyright, and trademark infringement."

As part of its transparency report, Facebook is also publishing its latest Community Standards Enforcement Report. In it, the company highlights its efforts in removing content that violates Facebook guidelines, including "adult nudity and sexual activity, fake accounts, hate speech, spam, terrorist propaganda, and violence and graphic content," as well as two new categories: bullying and harassment, and child nudity and sexual exploitation of children. The timeline covered is April through September of 2018.

There's a lot to unpack here, but at a high level Facebook says it removed over 1.5 billion fake accounts from April through September, up from the 1.3 billion accounts it removed in the previous six months. If you were wondering just how widespread false content and accounts are on the platform, wonder no more.

While Facebook catches more than 90 percent of adult nudity and sexual activity, child nudity and sexual exploitation of children, fake accounts, spam, terrorist propaganda, and violent and graphic content before users report it, there are two categories where its content moderation falls short. The company proactively found and removed only 14.9 percent of bullying and harassment content and only 51.6 percent of hate speech violations before users reported them (those figures cover July through September of this year).

This is the first time Facebook is reporting on bullying and harassment, so we'd expect that number to rise as the company puts more focus behind it. Similarly, while Facebook has a long way to go before it catches hate speech as reliably as it catches other bad behavior, it's doing a much better job than it did a year ago. In Q4 of 2017, Facebook caught only 23.6 percent of hate speech before it was reported, a number that has more than doubled as of today's report.

Perhaps related to the increased scrutiny Facebook is under, the company reports that government requests for account data increased worldwide by around 26 percent compared to the second half of 2017 (103,815 requests total, up from 82,341). In the US, government requests increased by 30 percent; in 56 percent of those cases, the US government prohibited Facebook from notifying the user.

Given the fire Facebook has come under since the New York Times report was published yesterday (not to mention everything the company has been dealing with for the past two years), it's not surprising to see it try to release some positive numbers here. In all likelihood, though, the furor over the company's continued avoidance of responsibility for the many issues it's linked to will carry on. That said, Facebook continues to make more money and attract new users, albeit at a slower pace than in the past. Until that trend changes, it's hard to expect the company to act any differently than it does right now.