Meta approved hate-filled Facebook ads that called for violence in Europe

The ads, which were bought by a watchdog group, never ran.

Meta is again facing allegations it’s not doing enough to prevent the spread of hate speech and violent content in Facebook ads. A new report details eight such ads, targeting audiences in Europe, that were approved despite containing blatant violations of the company’s policies around hate speech and violence.

The report comes from watchdog organization Ekō, which is sharing its work to draw attention to the social network’s “sub-standard moderation practices” ahead of the Digital Services Act (DSA) going into effect in Europe later this week. It details how, over a period of a few days in early August, the organization attempted to buy 13 Facebook ads, all of which used AI-generated images and included text that clearly violated the company’s rules.

Ekō pulled the ads before they could be seen by any users. The group requested that the exact wording of the ads be withheld, but offered descriptions of some of the most egregious examples. Approved ads included one, placed in France, that “called for the execution of a prominent MEP because of their stance on immigration,” as well as an ad targeting German users that “called for synagogues to be burnt to the ground to ‘protect White Germans.’” Meta also approved ads in Spain claiming the most recent election had been stolen and urging people to engage in violent protests to reverse it.

“This report was based on a very small sample of ads and is not representative of the number of ads we review daily across the world,” a spokesperson for Meta said in a statement. “Our ads review process has several layers of analysis and detection, both before and after an ad goes live. We’re taking extensive steps in response to the DSA and continue to invest significant resources to protect elections and guard against hate speech as well as against violence and incitement.”

A handful of the ads were stopped by Meta’s checks, but Ekō says they were blocked because they were flagged as political, not because of the violent and hate-filled rhetoric they contained. (The company requires political advertisers to go through an additional vetting process before they are eligible to place ads.)

Ekō is using the report to advocate for additional safeguards under the DSA, a sweeping law that requires tech platforms to limit some kinds of targeted advertising and allow users to opt out of recommendation algorithms. (Several services, including Facebook, Instagram and TikTok, have recently made changes to comply with the latter provision.) It also requires platforms to identify and mitigate “systemic risks,” including those related to illegal and violent content.

“With a few clicks, we were able to prove just how easy it is for bad actors to spread hate speech and disinformation,” Vicky Wyatt, Ekō’s campaign director, said in a statement. “With EU elections around the corner, European leaders must enforce the DSA to its fullest extent and finally rein in these toxic companies.”