A public challenge could put a temporary stop to the deployment of ChatGPT and similar AI systems. The nonprofit research organization Center for AI and Digital Policy (CAIDP) has filed a complaint with the Federal Trade Commission (FTC) alleging that OpenAI is violating the FTC Act through its releases of large language models like GPT-4. That model is "biased, deceptive" and threatens both privacy and public safety, CAIDP claims. It also supposedly fails to meet Commission guidelines calling for AI to be transparent, fair and easy to explain.
The Center wants the FTC to investigate OpenAI and suspend future releases of large language models until they meet the agency's guidelines. The researchers want OpenAI to require independent reviews of GPT products and services before they launch. CAIDP also hopes the FTC will create an incident reporting system and formal standards for AI generators.
We've asked OpenAI for comment. The FTC has declined to comment. CAIDP president Marc Rotenberg was among those who signed an open letter demanding that OpenAI and other AI researchers pause work for six months to allow time for ethics discussions. Elon Musk, who co-founded OpenAI but has since left the company, also signed the letter.
Critics of ChatGPT, Google Bard and similar models have warned of problematic output, including inaccurate statements, hate speech and bias. Users also can't reproduce results, CAIDP says. The Center points out that OpenAI itself warns AI can "reinforce" ideas whether or not they're true. And while upgrades like GPT-4 are more reliable, there's a concern that people may rely on the AI without double-checking its content.
There's no guarantee the FTC will act on the complaint. If it does set requirements, though, the move would affect development across the AI industry. Companies would have to wait for assessments, and could face repercussions if their models failed to meet the Commission's standards. While this might improve accountability, it could also slow the currently rapid pace of AI development.