It's increasingly clear that some of Facebook's methods for fighting fake news are more effective than others. The social network has ended an experiment in which it prioritized comments accusing stories of being fake, theoretically challenging fake news and other sketchy articles. A Facebook spokesperson talking to the BBC didn't explain exactly why the trial was discontinued (it's described as a "small test which has now concluded"), but some of the publicly available examples suggest it was simply too indiscriminate.
Users included in the test noted that the system simply promoted comments containing keywords like "fake" or "lie," regardless of what the comment actually said. It wasn't picky about the source stories, either, so these incredulous remarks were highlighted on trustworthy articles as well. How are you supposed to trust Facebook's judgment if it isn't scrutinizing the content of the stories themselves?
This isn't the end of the experiments. The spokesperson said the company will "keep working to find new ways" to fight misinformation online. In other words, Facebook knows there's a lot of work left to do; it'll take a while before it can reliably discredit the right stories.