Google: Our AI won't be a weapon

A new manifesto by CEO Sundar Pichai explains what the company will allow its AI to do.

Google has been in hot water for the last few months as details about its partnership with the US military revealed the tech titan's involvement in a clandestine program with potentially violent, if indirect, applications. After internal and external backlash, the company backed out of the project last week. Today, Google CEO Sundar Pichai published a new policy in response that lays out the company's ethos: From now on, it won't design or deploy AI for weapons, for surveillance purposes or for technology "whose purpose contravenes widely accepted principles of international law and human rights."

Of course, as that last phrase suggests, there's a bit of wiggle room in the text. Pichai's post lays out the company's priorities for its AI research: it will strive to pursue applications that benefit the public good, avoid reinforcing bias, incorporate privacy design principles and meet other aspirations. The list of prohibited AI uses is explicit about weapons and surveillance, but leaves a for-the-greater-good exception for technologies that could cause material harm. So long as such projects follow safety practices, "we will proceed only where we believe that the benefits substantially outweigh the risks."

Project Maven used Google's AI research to process drone footage, which may have been intended to help with military targeting. In response, engineers petitioned the company and some reportedly quit in protest before the tech titan announced it wouldn't renew its involvement in the project. And yet, the newly outlined policies don't close the company off from working with the government: Google will still collaborate on projects including cybersecurity, training, military recruitment, veterans' healthcare, and search and rescue, Pichai wrote.