
Anthropic, Google, Microsoft and OpenAI form an AI safety group

The Frontier Model Forum says it will focus on secure, careful AI development.


It's no secret that AI development brings a lot of security risks. While governing bodies are working to put forth regulations, for now, it's largely up to the companies themselves to take precautions. The latest show of self-regulation comes with Anthropic, Google, Microsoft and OpenAI's joint creation of the Frontier Model Forum, an industry-led body concentrating on safe, careful AI development. It considers frontier models to be any "large-scale machine-learning models" that exceed current capabilities and can perform a vast range of tasks.

The Forum plans to establish an advisory committee, charter and funding. It has laid out four core pillars it intends to focus on: furthering AI safety research, determining best practices, working closely with policymakers, academics, civil society and companies, and encouraging efforts to build AI that "can help meet society's greatest challenges."

Members will reportedly work on the first three objectives over the next year. Speaking of membership, the announcement outlines the necessary qualifications to join, such as producing frontier models and showing a clear commitment to making them safe. "It is vital that AI companies–especially those working on the most powerful models–align on common ground and advance thoughtful and adaptable safety practices to ensure powerful AI tools have the broadest benefit possible," Anna Makanju, OpenAI's vice president of global affairs, said in a statement. "This is urgent work and this forum is well-positioned to act quickly to advance the state of AI safety."

The creation of the Forum follows a recent safety agreement between the White House and top AI companies, including those behind this new venture. The safety measures they committed to included having external experts test their models for bad behavior and watermarking AI-generated content.