Three of the four workers were at Facebook between June 2015 and May 2017, the period when these ads ran on the social network. According to them, each worker used screening software to evaluate specific portions of an advertisement queued up for review, perhaps vetting images or text segments but never looking at the whole ad. The workers typed in key codes to tag each segment with descriptors, which let them sift through material quickly but not critically, the anonymous sources told The Verge. They were on the lookout for sexually explicit or violent material as well as scams, not subtle attempts to influence the election.
Or, as one worker told The Verge, "They weren't screening for, like, propaganda or anything." They were looking more for marketing attempts capitalizing on fear to sell products, not change opinions.
The Russian-backed ads had some characteristics that could have raised red flags among reviewers, but the massive volume of advertisements flowing in daily left room for the ones in question to sneak by. An algorithm was also used to screen ads, one the workers were constantly training, they told The Verge; conceivably, it could have approved all of the Russian-backed content.
We've reached out to Facebook for comment and will include it when we hear back.