And in at least one case, the suggestions appear to have aided recruiting efforts. An Indonesian ISIS backer sent a friend request to a non-religious New Yorker in March 2017, and within six months the man had come to support the group.
Facebook has been taking action, but the report noted that fewer than half of the flagged accounts had vanished over a six-month period. The company would also remove posts identified as terrorist content, but it didn't always ban the user in question. One terrorism suspect in the UK managed to get his account reinstated nine times despite posting propaganda videos.
A company spokesman stressed that its current approach "is working," with 99 percent of Al Qaeda- and ISIS-oriented content removed automatically. At the same time, the company acknowledged that there's "no easy technical fix" and that it would "continue to invest" in both human reviewers and technology to catch extremist material.
The situation should improve, then, but this illustrates one of the central problems with curbing extremism online: the same features that help you stay in touch can inadvertently aid terrorists by connecting them with like-minded extremists. It also shows how far internet behemoths like Facebook have to go in fighting terrorism. While there have certainly been improvements, there are still areas where extremists can spread their message.