Google says its military AI work will be guided by ethical principles

It won’t provide AI technology for weaponry.

Google's Pentagon contract and its involvement with the military's Project Maven have stirred controversy both inside and outside the company. Its plan to provide AI technology that can help flag drone images for human review has prompted an internal petition, signed by thousands of employees who oppose the decision, as well as a number of resignations. Now, the New York Times reports that Google is working on a set of guidelines aimed at steering the company's decisions on defense and intelligence contracts.

Last week, CEO Sundar Pichai told employees that the company wanted to develop principles that "stood the test of time," according to those present for his remarks, and Google told the New York Times that the guidelines would prohibit the use of AI in weaponry. How that prohibition will work in practice is unclear, but employees said they expected the principles to be announced internally within the next few weeks.

Whatever form these guidelines take, it wouldn't be surprising to see continued backlash against the company's contract with the Pentagon. Banning collaboration on weaponized AI may not be enough to quell concerns if the "non-offensive" involvement, as Google calls it, could still lead to offensive actions such as drone strikes. "Amid growing fears of biased and weaponized AI, Google is already struggling to keep the public's trust," the petition read. "The argument that other firms, like Microsoft and Amazon, are also participating doesn't make this any less risky for Google. Google's unique history, its motto Don't Be Evil, and its direct reach into the lives of billions of users set it apart."

Earlier this month, an employee who left Google over the contract told Gizmodo, "I wasn't happy just voicing my concerns internally. The strongest possible statement I could take against this was to leave."