FTC warns tech companies against AI shenanigans that harm consumers

The commission will not be fooled by old scams leveraging new tech.

Since its establishment in 1914, the US Federal Trade Commission has stood as a bulwark against the fraud, deception, and shady dealings that American consumers face every day: fining brands that "review hijack" Amazon listings, making it easier to cancel magazine subscriptions and blocking exploitative ad targeting. On Monday, Michael Atleson, an attorney in the FTC's Division of Advertising Practices, laid out both the commission's reasoning for how emerging generative AI systems like ChatGPT and DALL-E 2 could be used to violate the FTC Act's prohibition on unfair practices, and what it would do to companies found in violation.

"Under the FTC Act, a practice is unfair if it causes more harm than good," Atleson said. "It’s unfair if it causes or is likely to cause substantial injury to consumers that is not reasonably avoidable by consumers and not outweighed by countervailing benefits to consumers or to competition."

He notes that the new generation of chatbots like Bing, Bard and ChatGPT can be used to influence a user's "beliefs, emotions, and behavior." We've already seen them employed as negotiators within Walmart's supply network and as talk therapists, both occupations specifically geared towards influencing the people around them. Compound that with the common effects of automation bias, wherein users more readily accept the word of a presumably impartial AI system, and of anthropomorphism, and the potential for manipulation grows. "People could easily be led to think that they’re conversing with something that understands them and is on their side," Atleson argued.

He concedes that the issues surrounding generative AI technology extend far beyond the FTC's immediate purview, but reiterates that the commission will not tolerate unscrupulous companies using the technology to take advantage of consumers. "Companies thinking about novel uses of generative AI, such as customizing ads to specific people or groups," the FTC lawyer warned, "should know that design elements that trick people into making harmful choices are a common element in FTC cases, such as recent actions relating to financial offers, in-game purchases, and attempts to cancel services."

The FTC's guardrails also apply to placing ads within a generative AI application, not unlike how Google inserts ads into its search results. "People should know if an AI product’s response is steering them to a particular website, service provider, or product because of a commercial relationship," Atleson wrote. "And, certainly, people should know if they’re communicating with a real person or a machine."

Finally, Atleson leveled an unsubtle warning at the tech industry. "Given these many concerns about the use of new AI tools, it’s perhaps not the best time for firms building or deploying them to remove or fire personnel devoted to ethics and responsibility for AI and engineering," he wrote. "If the FTC comes calling and you want to convince us that you adequately assessed risks and mitigated harms, these reductions might not be a good look." That's a lesson Twitter already learned the hard way.