There's been a lot of discussion about fake news: how it spreads on social networks and how it shapes behaviors like political decision-making. But there hasn't really been an in-depth look at how true and false information spreads on sites like Facebook or Twitter, nor an analysis of how big a role bots play in that spread. Now, three researchers at MIT have published a study in Science that does just that, and their work finds that false information spreads faster, farther, deeper and more broadly than true information, and that humans, not bots, are to blame.
The researchers looked at a broad swath of news stories and rumors shared on Twitter from its launch in 2006 through 2017; in all, they analyzed the spread of approximately 126,000 stories tweeted by three million people over 4.5 million times. The stories they used were ones that had been verified or debunked by six fact-checking organizations -- Snopes, PolitiFact, FactCheck.org, TruthOrFiction.com, Hoax-Slayer and UrbanLegends.about.com. They then took those stories, deemed either true or false, and looked at how they spread on Twitter. Specifically, they analyzed cascades, or retweet chains, sharing the information. Here's how the researchers describe the cascades: if a story is tweeted by 10 people separately, none of whom are retweeted, it has 10 cascades of size one. But if two people tweet it and are each retweeted 100 times, the story has two cascades of size 100. A cascade, then, begins with a separately shared story, and its size is determined by how many times that original tweet is shared.
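The cascade bookkeeping described above can be sketched in a few lines of Python. This is a toy illustration with made-up tweet IDs and a simplified data layout, not the study's actual pipeline: each tweet is tagged with the original tweet its chain descends from, and every original tweet starts its own cascade.

```python
from collections import defaultdict

def cascade_sizes(tweets):
    """Group tweets by the original tweet they descend from.

    Each original (non-retweet) tweet starts one cascade; here a
    cascade's size is the number of tweets in its chain. An original
    tweet that is never retweeted yields a cascade of size one.
    """
    sizes = defaultdict(int)
    for tweet_id, root_id in tweets:  # root_id == tweet_id for originals
        sizes[root_id] += 1
    return dict(sizes)

# Hypothetical toy data: "t1" is tweeted once and never retweeted,
# while "t2" is an original tweet that picks up two retweets.
tweets = [
    ("t1", "t1"),                               # cascade of size one
    ("t2", "t2"), ("t3", "t2"), ("t4", "t2"),   # cascade of size three
]

print(cascade_sizes(tweets))  # {'t1': 1, 't2': 3}
```

If the same story is tweeted independently by several accounts, each original tweet shows up as its own key here, which mirrors the study's point that one story can have many separate cascades.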
The team found that while true story cascades rarely reached 1,000 people, the top false news stories often reached between 1,000 and 100,000 people. False news also spread faster, with true news taking six times longer to reach 1,500 people. And false stories reached a retweet chain 19 users deep 10 times faster than true stories reached a chain of 10. False stories were also retweeted by more users overall than true stories; in fact, the researchers determined that false stories were 70 percent more likely to be retweeted than true ones.
Additionally, of the different types of stories -- which include politics, urban legends, business, terrorism, science, entertainment and natural disasters -- false political ones spread further and faster than any other type of false story. And while it would make sense that accounts with more followers, that tweeted more often, that were more likely to be verified or that had been on Twitter longer would play a larger role in spreading false stories, the researchers found the exact opposite to be true. Those spreading false news had fewer followers, followed fewer people, were less active, were verified less frequently and had been on Twitter for less time than those who spread true stories.
Further, since the selected stories were limited to those checked by the six fact-checking groups, the researchers were aware that the selection might be biased toward particular types of subjects. So they took a whole new set of stories that hadn't been analyzed by those groups and had three undergraduates fact-check them. They then analyzed their spread as they did with the previous set of stories, and the results were nearly identical to their other findings.
And lastly, to determine the effect of bots, the researchers, who had initially removed bot-generated content from the analysis, added it back in and found that nothing changed. False stories still spread farther, faster, deeper and more broadly than true stories, meaning humans, not bots, are the main contributors to their spread.
More research needs to be done to figure out why false news spreads the way it does, but the team suggests that novelty may play a role. They found that novel news items were more likely to be tweeted and that false news stories were more likely to be novel, so novelty could contribute to the speed and depth with which false stories spread.
The study's findings are interesting, particularly in light of recent events. Putting them to use in efforts to combat the spread of fake news, though, will likely be a far more challenging task. "Understanding how false news spreads is the first step toward containing it," the authors write. "We hope our work inspires more large-scale research into the causes and consequences of the spread of false news as well as its potential cures."