OpenAI's trust and safety lead is leaving the company

The organization is facing an FTC investigation.

OpenAI’s trust and safety lead, Dave Willner, has left the role, as announced in a LinkedIn post. Willner is staying on in an “advisory role” but has asked LinkedIn followers to “reach out” about related opportunities. He says the move comes after a decision to spend more time with his family. Yes, that’s what they always say, but Willner follows it up with actual details.

“In the months following the launch of ChatGPT, I've found it more and more difficult to keep up my end of the bargain,” he writes. “OpenAI is going through a high-intensity phase in its development — and so are our kids. Anyone with young children and a super intense job can relate to that tension.”

He goes on to say he’s “proud of everything” the company accomplished during his tenure and notes it was “one of the coolest and most interesting jobs” in the world.

Of course, this transition comes hot on the heels of some legal hurdles facing OpenAI and its signature product, ChatGPT. The FTC recently opened an investigation into the company over concerns that it is violating consumer protection laws and engaging in “unfair or deceptive” practices that could hurt the public’s privacy and security. The investigation involves a bug that leaked users’ private data, which certainly seems to fall under the purview of trust and safety.

Willner says his decision was actually a “pretty easy choice to make, though not one that folks in my position often make so explicitly in public.” He adds that he hopes it will help normalize more open discussions about work/life balance.

Concerns over the safety of AI have grown in recent months, and OpenAI is one of the companies that agreed to place certain safeguards on its products at the behest of President Biden and the White House. These include allowing independent experts access to its code, flagging risks to society such as biases, sharing safety information with the government and watermarking audio and visual content to let people know it’s AI-generated.