Last June, Facebook described how it uses AI to help find and take down terrorist content on its platform, and in November, the company said its AI tools had allowed it to remove nearly all ISIS- and Al Qaeda-related content before it was flagged by a user. Its efforts to remove terrorist content with artificial intelligence came up frequently during Mark Zuckerberg's Congressional hearings earlier this month, and the company's lead policy manager of counterterrorism spoke about the work at SXSW in March. Today, Facebook gave an update on that work in an installment of its Hard Questions series.
Facebook defines terrorism as "Any non-governmental organization that engages in premeditated acts of violence against persons or property to intimidate a civilian population, government or international organization in order to achieve a political, religious or ideological aim." It notes that governments aren't included because of a "general academic and legal consensus that nation-states may legitimately use violence under certain circumstances." The company said its detection technology has been instrumental in rooting out terrorist content, as has its counterterrorism team, which has grown from 150 to 200 people since June.
Focusing on ISIS and Al Qaeda -- since they pose the greatest global threat, says Facebook -- the company took action on 1.9 million pieces of content in the first quarter of 2018, twice as many as in the last quarter of 2017. The report also notes that 99 percent of that content was spotted without a user having to report it, with both technology and internal reviewers contributing to that rate.
Along with removing content, the speed at which it's found and taken down matters. Facebook says that during Q1, newly uploaded terrorist content was identified in less than one minute on average. Quick removal of such content has been a focus of European guidelines for social media sites.
Facebook is also working on finding old content. It has designed tools specifically geared toward surfacing older material, and in Q1, 600,000 pieces of terrorism-related content were removed through those means.
"We're under no illusion that the job is done or that the progress we have made is enough," said Facebook. "Terrorist groups are always trying to circumvent our systems, so we must constantly improve. Researchers and our own teams of reviewers regularly find material that our technology misses. But we learn from every misstep, experiment with new detection methods and work to expand what terrorist groups we target."