Current methods for filtering out terrorist content are still quite limited, and a recent discovery makes that all too clear. Motherboard and the Global Intellectual Property Enforcement Center's Eric Feinberg have discovered that variants of the Christchurch mass shooter's video were available on Facebook 36 days after the incident, despite Facebook's efforts to wipe them from the social network. Some of the videos were trimmed to roughly a minute, but all of them were open to the public -- you just had to click through a "violent or graphic content" warning to see them. Others appeared to dodge filtering attempts by using screen captures instead of the raw video.
One variation had been around since the time of the attack. All of the videos Feinberg found were sitting on Arabic-language pages.
Facebook has removed one of the videos as of this writing, and reiterated its plans to improve its filtering technology. It's using audio recognition to spot clips that might otherwise evade filters, and it's researching tech that could identify edited versions of clips.
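Facebook hasn't published the details of its matching systems, but the underlying challenge is easy to illustrate. Exact fingerprints (cryptographic hashes) break the moment a clip is trimmed, re-encoded or screen-captured, which is why platforms research fuzzier "perceptual" signatures that tolerate small changes. The sketch below is a toy illustration of that gap, not Facebook's actual method: `coarse_signature` is a made-up, simplified stand-in for a real perceptual hash, operating on a single frame represented as a list of pixel values.

```python
import hashlib

def exact_hash(frame):
    # Cryptographic hash: changing even one pixel value yields a
    # completely different digest, so edited copies never match.
    return hashlib.sha256(bytes(frame)).hexdigest()

def coarse_signature(frame, block=4):
    # Toy perceptual-style signature (hypothetical, for illustration):
    # average pixels in small blocks, then record whether each block is
    # brighter or darker than the frame's overall mean. Minor edits such
    # as a uniform brightness shift usually leave these bits unchanged.
    blocks = [sum(frame[i:i + block]) // block
              for i in range(0, len(frame), block)]
    mean = sum(blocks) / len(blocks)
    return ''.join('1' if b >= mean else '0' for b in blocks)

# A tiny synthetic "frame" and a slightly brightened copy, standing in
# for an original upload and a re-encoded variant of it.
original = [10, 12, 11, 13, 200, 198, 201, 199,
            50, 52, 51, 53, 120, 118, 121, 119]
reencoded = [v + 1 for v in original]

# The exact hashes no longer match, but the coarse signatures still do.
print(exact_hash(original) == exact_hash(reencoded))          # False
print(coarse_signature(original) == coarse_signature(reencoded))  # True
```

Real perceptual hashes work on far richer features (and on audio, as Facebook notes), but the trade-off is the same: the fuzzier the match, the more variants you catch -- and the more unrelated content you risk flagging by mistake.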
At the moment, though, the findings illustrate the challenges of completely removing terrorist material. It's difficult to account for every possible variation of a video, especially if posters are deliberately evading filters. That, in turn, raises questions about laws that would punish companies for failing to remove extremist material. Would Facebook be held responsible if authorities found videos that slipped through the cracks? While it's doubtful internet giants would face significant punishment for mistakes, the laws may set a bar that current technology can't clear.