Facebook now flags fake news

It's not a particularly effective solution, but it's a start.


After taking heat for months in the run-up to the presidential election, Facebook has been cracking down on fake news spreading through its social network. The company recently began working with third-party fact-checkers and gave users the ability to manually report fake news posts. Late last week, it announced that it will soon attach "Disputed" labels to these false stories as well.

Facebook originally promised to do this back in December (along with the fact-checkers, curated articles and manual flagging). Under this system, bogus posts from disreputable sites will still show up in your timeline, but they'll be accompanied by a small warning banner. These banners are applied only after a lengthy vetting process. First, the post has to be flagged either by a certain number of users or by the company's automated software. It is then sent for review to a fact-checking organization like Snopes or PolitiFact. If two or more fact-checkers confirm the post is false, Facebook applies the banner.

This process is time-intensive, if nothing else. In a recent case reported by Gizmodo, a fake news post from a satirical entertainment website, one claiming that Trump's own Android phone was responsible for the recent spate of leaks, remained unlabeled for nearly five days after it was first posted.

And even when a post is accurately labeled, calling it "disputed" rather than "false" does little to change the minds of people who already reject mainstream news reporting in favor of fringe conspiracy sites like Infowars or Breitbart. So we'll have to wait and see whether this new labeling scheme improves the tenor of discourse on Facebook or whether it becomes another bust like the site's crackdown on private gun sales.