OpenAI is forming a team to rein in superintelligent AI

The announcement comes as governments debate how to regulate AI technologies.

OpenAI is forming a dedicated team to manage the risks of superintelligent artificial intelligence. A superintelligence is a hypothetical AI model that is smarter than even the most gifted human and excels across many domains of expertise, rather than in a single domain like some previous-generation models. OpenAI believes such a model could arrive before the end of the decade. “Superintelligence will be the most impactful technology humanity has ever invented, and could help us solve many of the world’s most important problems,” the company said. “But the vast power of superintelligence could also be very dangerous, and could lead to the disempowerment of humanity or even human extinction.”

The new team will be co-led by OpenAI Chief Scientist Ilya Sutskever and Jan Leike, the research lab's head of alignment. Additionally, OpenAI said it would dedicate 20 percent of its currently secured compute power to the initiative, with the goal of developing an automated alignment researcher. Such a system would theoretically assist OpenAI in ensuring a superintelligence is safe to use and aligned with human values. “While this is an incredibly ambitious goal and we’re not guaranteed to succeed, we are optimistic that a focused, concerted effort can solve this problem,” OpenAI said. “There are many ideas that have shown promise in preliminary experiments, we have increasingly useful metrics for progress, and we can use today’s models to study many of these problems empirically.” The lab added it would share a roadmap in the future.

Wednesday’s announcement comes as governments around the world consider how to regulate the nascent AI industry. In the US, OpenAI CEO Sam Altman has met with at least 100 federal lawmakers in recent months. Publicly, Altman has said AI regulation is “essential,” and that OpenAI is “eager” to work with policymakers. But we should be skeptical of such proclamations, and indeed of efforts like OpenAI’s Superalignment team. By focusing public attention on hypothetical risks that may never materialize, organizations like OpenAI shift the burden of regulation to the horizon instead of the here and now. There are far more immediate issues around the interplay between AI and labor, misinformation, and copyright that policymakers need to tackle today, not tomorrow.