DeepMind, Elon Musk and others pledge not to make autonomous AI weapons

They also call on governments to institute laws against such technology.

Today at the International Joint Conference on Artificial Intelligence (IJCAI), the Future of Life Institute announced that more than 2,400 individuals and 160 companies and organizations have signed a pledge declaring that they will "neither participate in nor support the development, manufacture, trade or use of lethal autonomous weapons." The signatories, representing 90 countries, also call on governments to pass laws against such weapons. Google DeepMind and the XPRIZE Foundation are among the groups that have signed on, while Elon Musk and DeepMind co-founders Demis Hassabis, Shane Legg and Mustafa Suleyman have made the pledge as well.

The pledge comes as a handful of companies face backlash over their technologies and how they're providing them to government agencies and law enforcement groups. Google has come under fire for its Project Maven Pentagon contract, which provides AI technology to help the military flag drone images that require additional human review. Similarly, Amazon is facing criticism for sharing its facial recognition technology with law enforcement agencies, while Microsoft has been called out for providing services to Immigration and Customs Enforcement (ICE).

"Thousands of AI researchers agree that by removing the risk, attributability and difficulty of taking human lives, lethal autonomous weapons could become powerful instruments of violence and oppression, especially when linked to surveillance and data systems," says the pledge. It adds that those who sign agree that "the decision to take a human life should never be delegated to a machine."

"I'm excited to see AI leaders shifting from talk to action, implementing a policy that politicians have thus far failed to put into effect," Future of Life Institute President Max Tegmark said in a statement. "AI has huge potential to help the world -- if we stigmatize and prevent its abuse. AI weapons that autonomously decide to kill people are as disgusting and destabilizing as bioweapons, and should be dealt with in the same way."

Google has already released its own set of AI principles, meant to guide the company's ethics around the technology. Its policy states that it won't design or deploy AI for use in weapons, surveillance or technology "whose purpose contravenes widely accepted principles of international law and human rights." Microsoft has stated that its work with ICE is limited to email, calendar, messaging and document management, and doesn't include any facial recognition technology. The company is also working on a set of guiding principles for its facial recognition work.

In 2015, Musk donated $10 million to the Future of Life Institute for a research program focused on ensuring AI will be beneficial to humanity. And last year, Musk, Hassabis and Suleyman signed a Future of Life Institute letter to the UN seeking regulation of autonomous weapons systems.