Fake news isn't just an American problem, or a political problem, or a Facebook problem or a Twitter problem. Yesterday, The Washington Post reported that five people were killed in a lynching in India spurred by online rumors of child trafficking. According to The Times of India, police believe that villagers, primed by rumors that traffickers were active in the area, killed the group after one of its members spoke to a child. It's the latest in a series of violent incidents in the country that have left twelve people dead over the last month, all connected to fake messages spread on social media, mainly through the messaging service WhatsApp.
Most of the perpetrators are villagers, many of them using smartphones for the first time, who are incited to violence by rumors claiming that certain people are organ or child traffickers. Local authorities have tried to combat the spread of fake news by warning the populace, even paying street performers and "rumor busters" to visit villages and preach caution. One of those rumor busters was killed by a mob last Thursday while addressing a crowd.
This is far from the first violence caused by the spread of fake news online. But it is alarming to see so many deaths caused by those new to social media who don't know to be skeptical of scams and deception. And it's happened before, with horrific consequences.
The surge of public sentiment and homicidal violence that pushed 650,000 of Myanmar's Muslim Rohingya minority out of the country was fueled by hate speech that spread online like wildfire, primarily through Facebook. Less than one percent of Myanmar's population had internet access in 2014, but today a quarter of the country's 53 million people use Facebook. Given that rapid adoption, and the government's use of the platform to convey public messages, it's no wonder that UN human rights experts believe the social network played a role in spreading hate speech, according to Reuters.
In the last few days, Facebook-owned WhatsApp has given group administrators control over which members can post messages. But monitoring what users actually say in those messages is harder, given the service's end-to-end encryption.
"WhatsApp is working to make it clear when users have received forwarded information and provide controls to group administrators to reduce the spread of unwanted messages in private chats," WhatsApp spokesperson Carl Woog told The Washington Post. "We've also seen people use WhatsApp to fight misinformation, including the police in India, news organizations and fact checkers. We are working with a number of organizations to step up our education efforts so that people know how to spot fake news and hoaxes circulating online."