UK: Facebook, Google, Twitter 'consciously failing' on terrorism

Social media is "the vehicle of choice in spreading propaganda."

The UK parliament has slammed Facebook, Twitter and YouTube for "consciously failing" to remove terrorism recruitment content. According to a report from the Home Affairs Committee, the social networks are "the vehicle of choice in spreading propaganda and the recruiting platforms for terrorism." In statements to the WSJ, the companies denied that they are lax with extremist postings. "We deal swiftly and robustly with reports of terrorism-related content," a Facebook spokesperson said.

The committee based its report on statements from intelligence groups, the Muslim community, counter-terrorism experts and security specialists. Other experts told the WSJ that the document is misleading, saying terrorists are more likely to recruit via heavily encrypted messaging services like WhatsApp and Telegram -- apps that are also in the US and UK governments' crosshairs.

The role of online networks in abetting terrorism has been a hot topic of late -- Twitter recently said it had suspended 360,000 terrorism-related accounts since the beginning of the year. Given the firms' billions in revenue, though, the lawmakers believe they aren't doing enough. "These companies have teams of only a few hundred employees to monitor networks of billions of accounts and Twitter does not even proactively report extremist content to law enforcement agencies," the report states.

The Home Affairs Committee wants social websites to take a "zero tolerance approach to online extremism," and recommended laws that would force social networks to quickly remove terrorist propaganda and inform law enforcement. (While the committee's recommendations are non-binding, they carry significant weight in the UK's parliament.) The European Union recently secured commitments from Facebook, Twitter, Microsoft and Google to put hate speech policies in place and to remove and report flagged content within 24 hours.