He explained that while the platform's AI can quickly detect videos containing suicidal or harmful acts, the shooter's stream didn't trigger those systems. To train the matching AI to detect that specific type of content, the platform needs large volumes of training data, which, as Facebook explains, is difficult to obtain because "these events are thankfully rare." In addition, none of the people who watched the live broadcast reported it -- the first user report came in 29 minutes after the broadcast began and 12 minutes after it ended. To be fair, the live broadcast was viewed fewer than 200 times, while the original video was watched 4,000 times overall.
Rosen also explained why over 300,000 copies managed to circulate on the platform even after Facebook's system had detected and removed 1.2 million copies of the video upon upload. He said a "core community of bad actors" continually re-uploaded edited versions of the video. By tweaking it slightly rather than uploading an identical copy of the original, they were able to circumvent the platform's filters. Some even played the original on their computers and recorded it with their phones. In all, Facebook detected over 800 variants of the video, each one visually distinct.
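That circumvention exploits a general weakness of exact-match filtering: a fingerprint computed over the whole file changes completely if even one byte of the content changes. The sketch below is purely illustrative (Facebook's actual matching technology is not public and is far more sophisticated); it just shows why a naive exact-hash filter misses a lightly edited copy while a similarity-based comparison does not.

```python
import hashlib

# Two byte strings standing in for an original upload and a
# slightly edited re-upload -- they differ by a single byte.
original = b"frame-data-frame-data-frame-data"
tweaked = b"frame-data-frame-data-frame-datA"

# Exact hashing: any change, however small, yields a completely
# different fingerprint, so an exact-match filter misses the copy.
h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(tweaked).hexdigest()
print(h1 == h2)  # False -- the edited copy slips past the check

def byte_similarity(a: bytes, b: bytes) -> float:
    """Toy fuzzy comparison: fraction of byte positions that agree."""
    matches = sum(x == y for x, y in zip(a, b))
    return matches / max(len(a), len(b))

# A similarity measure still flags the edited copy as near-identical.
print(byte_similarity(original, tweaked))  # ~0.97
```

Real-world matching systems work on perceptual features of the audio and video rather than raw bytes, which is why re-encoding or re-filming a video on a phone screen, as some uploaders did, is a harder case than a simple byte-level tweak.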
To prevent similar videos from circulating in the future, Facebook plans to improve its matching AI by, among other things, giving it audio-based detection capabilities. It also needs to ensure the AI can clearly differentiate such content from livestreamed video games. In addition, Facebook is exploring more ways to use AI to detect such live broadcasts faster, as well as ways to address user reports more quickly.