
Meta reportedly won't make its AI advertising tools available to political marketers

The experimental features can adjust images, generate backgrounds and automatically write video captions.


Facebook is no stranger to moderating and mitigating misinformation on its platform, having long employed machine learning and artificial intelligence systems to supplement its human-led moderation efforts. At the start of October, the company extended that machine learning expertise to its advertising business with an experimental set of generative AI tools that can perform tasks like generating backgrounds, adjusting images and creating captions for an advertiser's video content. Reuters reports Monday that Meta specifically will not make those tools available to political marketers ahead of what is expected to be a brutal and divisive national election cycle.

Meta's decision to bar the use of generative AI is in line with much of the social media ecosystem, though, as Reuters is quick to point out, the company "has not yet publicly disclosed the decision in any updates to its advertising standards." TikTok and Snap both ban political ads on their networks, Google employs a "keyword blacklist" to prevent its generative AI advertising tools from straying into political speech, and X (formerly Twitter) is, well, you've seen it.

Meta does allow for a wide latitude of exceptions to this rule. The ban extends only to "misleading AI-generated video in all content, including organic non-paid posts, with an exception for parody or satire," per Reuters. Those exceptions are currently under review by the company's independent Oversight Board as part of a case in which Meta left up an "altered" video of President Biden because, the company argued, it was not generated by AI.

Facebook, along with other leading Silicon Valley AI companies, agreed in July to a set of voluntary White House commitments to enact technical and policy safeguards in the development of their future generative AI systems. Those include expanding adversarial machine learning (aka red-teaming) efforts to root out bad model behavior, sharing trust and safety information both within the industry and with the government, and developing a digital watermarking scheme to authenticate official content and make clear that it is not AI-generated.