Google reportedly won’t renew its controversial military AI contract

Project Maven has attracted a lot of pushback.

The controversial government contract that led thousands of Google employees to sign a petition in opposition and dozens to quit in protest will not be renewed, Gizmodo reports. Project Maven has been billed by Google as a small, "non-offensive" deal through which it would provide open-source AI software to the Pentagon that could help the military flag drone images requiring further human review. But the project has been decried by many of the company's employees who believe it could erode the public's trust and runs counter to Google's "Don't Be Evil" motto.

According to three individuals who attended a weekly Google meeting this morning, Google Cloud CEO Diane Greene announced that the Project Maven contract would not be renewed when it expires next year. She said the backlash over the deal had been bad for the company and that the contract was pursued during a time when the company was actively seeking military work.

Internal emails obtained by Gizmodo showed that Google's plans for the project may not have been as low-key as the company wanted people to think. Google reportedly put at least 10 employees on the project, viewed the deal as a gateway to future military and intelligence contracts, and sought and received security authorizations that would allow it to work on additional government contracts. The Project Maven contract is also apparently worth more than Google executives once said, pulling in around $15 million instead of the $9 million that was previously reported, and its budget could have grown to as much as $250 million. Additionally, emails show that Google planned to build a surveillance system for the Pentagon that would let analysts "click on a building and see everything associated with it."

Earlier this week, Google said it was working on a set of ethical principles meant to guide its AI work with the military, and it said it would not allow its work to be used for the development of weaponry. Greene reportedly said during Google's meeting today that the ethics of AI is an important conversation, and one that Google is at the forefront of. "It is incumbent on us to show leadership," she said, according to a Gizmodo source.