UK government has its own AI for detecting extremist videos

And it could force tech companies to use it.

Stemming the tide of extremist content online has been a major focus for tech giants in recent years, but despite their efforts, the UK parliament has condemned companies such as Facebook, YouTube and Twitter for "consciously failing" to take robust enough action. To mitigate the problem, the Home Office has developed its own AI program that it says can detect Islamic State (IS) propaganda online with 99.99 percent accuracy.

The technology works by analyzing video content during the upload process, preventing it from reaching the internet in the first place -- a vast improvement on the average 36 hours it takes tech firms to remove extremist content, and faster still than the two-hour limit the UK government demanded last year. According to the Home Office, the tool automatically detects 94 percent of IS propaganda with 99.99 percent accuracy, and it will be made available to all internet platforms, particularly smaller sites such as Vimeo and pCloud, which have seen an increase in IS propaganda. The department says IS supporters used more than 400 unique online platforms to spread material in 2017.
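The Home Office hasn't published how the assessment actually works, but conceptually an upload-time filter like this pairs a classifier score with decision thresholds: block outright above one score, hold for human review above a lower one, publish otherwise. The sketch below is purely illustrative -- the function names, thresholds and dummy scoring are assumptions, not the Home Office's implementation.

```python
# Illustrative sketch only: the Home Office has not published its methodology.
# Every name, threshold and score here is a hypothetical stand-in.

BLOCK_THRESHOLD = 0.99   # above this, the upload is rejected outright
REVIEW_THRESHOLD = 0.50  # above this, the upload is held for human review


def propaganda_score(video_bytes: bytes) -> float:
    """Stand-in for a trained classifier. A real system would extract visual
    and audio features from the raw upload and return a probability; here we
    just return a fixed dummy value so the example runs."""
    return 0.02  # pretend the model thinks this upload is almost certainly benign


def check_upload(video_bytes: bytes) -> str:
    """Decide what happens to a video at upload time, before it is published."""
    score = propaganda_score(video_bytes)
    if score >= BLOCK_THRESHOLD:
        return "block"         # never reaches the platform
    if score >= REVIEW_THRESHOLD:
        return "human_review"  # held pending a moderator's decision
    return "publish"


if __name__ == "__main__":
    print(check_upload(b"fake-video-bytes"))  # -> "publish"
```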

The tool's development is the result of the government's impatience with tech companies: by demonstrating what's possible, it aims to strong-arm larger firms into taking meaningful action and to help smaller companies that lack the resources to tackle the problem. And Home Secretary Amber Rudd says she hasn't ruled out forcing companies to use the technology. Speaking to the BBC, she said, "We're not going to rule out taking legislative action if we need to do it, but I remain convinced that the best way to take real action, to have the best outcomes, is to have an industry-led forum like the one we've got. This has to be in conjunction, though, of larger companies working with smaller companies."

However, the technology has been met with criticism from some quarters. Open Rights Group, for example, raises the question of legal accountability for content removal. In a blog post, campaigner Jim Killock writes, "We need to be worried about the unwanted consequences of machine takedowns. Firstly, we are pushing companies to be the judges of legal and illegal. Secondly, all systems make mistakes and require accountability for them; mistakes need to be minimised, but also rectified."

The Home Office has not publicly detailed the methodology behind the video assessment, but says that of one million randomly selected videos, only 50 would require additional human review. Bearing in mind that Facebook has around two billion users, that could still add up to a significant volume of (potentially unjustly) flagged content every day, which is another factor tech giants have taken into account in their own automated systems. Last year the Global Internet Forum to Counter Terrorism saw the likes of Google, Twitter and Facebook come together to discuss classification techniques, engineering solutions and reporting, with these kinds of false positives in mind.
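To put that review rate in perspective, here is a rough back-of-the-envelope calculation. The 50-in-a-million figure comes from the Home Office claim above; the daily upload volume is an assumed round number for illustration, not a published platform statistic.

```python
# Back-of-the-envelope arithmetic on the human-review burden.
# The review rate is the Home Office figure; the upload volume is assumed.

reviews_per_million = 50
review_rate = reviews_per_million / 1_000_000    # 0.005% of videos flagged

assumed_daily_uploads = 5_000_000                # hypothetical uploads/day on a large platform
flagged_per_day = assumed_daily_uploads * review_rate

print(f"Review rate: {review_rate:.3%}")                        # 0.005%
print(f"Flagged for human review each day: {flagged_per_day:,.0f}")  # 250
```

Even at a fraction of a percent, the absolute number of videos held back for human judgement grows linearly with upload volume, which is why the largest platforms treat false-positive handling as a core part of any automated takedown system.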

In a statement given to TechCrunch, a Facebook spokesperson said it shares the goals of the Home Office and that its current approach is working, "but there is no easy technical fix to fight online extremism". However, the development of this new tool suggests the Home Office disagrees, and that tech firms need to do more to combat the issue, or risk being forced into taking action.