OpenAI endorses the Kids Online Safety Act
OpenAI, which is currently facing a raft of lawsuits over alleged safety lapses in ChatGPT, has endorsed the Kids Online Safety Act (KOSA). The company said its endorsement is part of a broader commitment to create "AI-specific rules" for kids' safety.
OpenAI's endorsement comes as KOSA, which passed the Senate in 2024, appears to be gaining momentum. First introduced in 2022, KOSA is one of several online safety bills that would require social media companies and other online platforms to implement stronger protections for children. The bill has been revised a number of times, but the current version includes a requirement for social media apps to allow minors to opt out of "addictive" features and algorithmic recommendations. Online platforms would also have a "duty of care" to mitigate harmful content that promotes eating disorders, suicide and sexual exploitation.
Apple, Microsoft, Snap and X have also endorsed the bill. NetChoice, a trade group whose members include Meta and other platforms, has said the measure would enable censorship without making kids safer online. Privacy and digital rights groups, like the Electronic Frontier Foundation, also oppose the bill.
Though KOSA has mainly been discussed in the context of social media platforms, OpenAI says the bill is "complementary" to the safety work it's already doing. "We can't repeat the mistakes made during the rise of social media, when stronger safeguards for teens weren't put in place until the platforms were already deeply embedded in young people's lives," OpenAI's Chief Global Affairs Officer Chris Lehane said in a statement.
OpenAI is currently facing a number of lawsuits related to its own track record on safety. The company has been sued for wrongful death by the family of a teen who died by suicide after allegedly discussing his plans with the chatbot. Another family recently filed a similar suit, saying their teen accidentally overdosed on drugs after receiving shoddy medical advice from ChatGPT.