Can Google keep its promises on building ethical AI?

The company certainly talks a good game.

Google's collaboration with the Department of Defense to develop AI systems for the US military's fleet of war drones, dubbed Project Maven, proved a double-edged sword for the technology company. On one hand, the DoD contract was quite lucrative, worth as much as $250 million annually.

On the other hand, public backlash to the news that the company was helping the government build more efficient killing machines was immediate, unwavering and utterly ruthless. A dozen employees quit the company in protest, and another 4,000 petitioned management to terminate the contract outright. The uproar was so deafening that Google had to come out and promise not to renew the deal when it expires next year.

Now, Sundar Pichai has gone even further to soothe the public, releasing his own version of Asimov's "Three Laws." Of course, Google is no stranger to the AI landscape. The company already leverages varying forms of AI in a number of its products, from Gmail and Photos to its salon-calling digital assistant and the waveform-generating system that allows Assistant to speak. But can a company that unilaterally removed its own "Don't be evil" guiding principle from common practice really be trusted to do the right thing when raising artificial minds to maturity?

1. Be socially beneficial.

Pichai's first point seems like one that everybody could easily get behind. I mean, what's the point of developing new technologies if they aren't going to serve the greater good of society as a whole? Unfortunately, Pichai doesn't go into any appreciable detail in his post, so it's difficult to suss out what this actually means.

"As we consider potential development and uses of AI technologies," he wrote, "we will take into account a broad range of social and economic factors, and will proceed where we believe that the overall likely benefits substantially exceed the foreseeable risks and downsides." That sounds great and all but who actually decides which risks outweigh which likely benefits? For example on Wednesday, the company's shareholders rejected a plan to increase the company's diversity levels, one which was broadly supported by Google's rank-and-file employees. Are we to believe that these same shareholders will act against their financial self interests in the name of avoiding societal pitfalls? The history of capitalism says no.

2. Avoid creating or reinforcing unfair bias.

"We will seek to avoid unjust impacts on people," Pichai wrote, "particularly those related to sensitive characteristics such as race, ethnicity, gender, nationality, income, sexual orientation, ability, and political or religious belief." If Google is capable of training an AI without imparting implicit bias, they are literally the only company to ever be able to do so. See: The Beauty.AI pageant bot, Microsoft's Tay, Microsoft's Zo, and seemingly every single facial recognition AI created to date.

3. Be built and tested for safety.

"We will design our AI systems to be appropriately cautious," Pichai wrote, "and seek to develop them in accordance with best practices in AI safety research." This is one promise Google can easily keep. Project Maven, for example, was licensed as open source, enabling a far greater degree of transparency than would otherwise be possible. What's more, Google is already involved with a number of AI research organizations including the Machine Intelligence Research Institute which focuses precisely on AI safety processes and the DeepMind Ethics & Society, which seeks to explore the ethics of AI. So long as Google remains committed to showing its work and not "moving fast and breaking stuff" I see little reason why the company can't remain a standard bearer for safe and reliable AI development.

4. Be accountable to people.

"Our AI technologies will be subject to appropriate human direction and control," Pichai wrote. Yeah, yeah, and all the dinosaurs in Jurassic Park were supposed to be female. The trouble with this assertion is, much like our nation's stockpile of nuclear weapons, it isn't the technology itself that's the issue, but rather the person with the authority to use it.

5. Incorporate privacy design principles.

Given how Facebook has fared following the Cambridge Analytica scandal, it only makes sense that Google would portray itself as a paragon of privacy. Unfortunately, the company's historical behavior doesn't do much to back up that assertion. Google is currently facing a class action lawsuit in the UK over its collection of personal user data through the Safari browser. US Senators are petitioning the FTC to investigate the company's collection of location data. And YouTube is taking heat for allegedly collecting data on children under 13.

6. Uphold high standards of scientific excellence.

"We aspire to high standards of scientific excellence as we work to progress AI development," Pichai wrote. "We will responsibly share AI knowledge by publishing educational materials, best practices, and research that enable more people to develop useful AI applications."

Like point three above, this should be a no-brainer for the company to uphold. Unlike more secretive companies such as Apple or Facebook, Google has long been upfront about its research efforts. It has opened AI research facilities throughout the UK and Canada staffed with academic researchers, teamed with marquee universities like UC Berkeley, Harvard and MIT, and even collaborated with other tech companies to develop ethical development guidelines as part of the Partnership on AI.

7. Be made available for uses that accord with these principles.

Essentially, Pichai is asserting that his company will only develop AIs that fit the ethical framework laid out in the first six points. He promises Google "will work to limit potentially harmful or abusive applications" of the machines that it builds, based on the nature and primary use of the technology, its intended scale and the nature of Google's involvement in the project. He specifically cites autonomous weapons, surveillance AI, "technologies whose purpose contravenes widely accepted principles of international law and human rights," and those that "cause or are likely to cause overall harm." Of course, how the company will determine what constitutes an acceptable level of harm, much like with point 1 above, remains to be seen.

Overall, sure, these are some great-sounding axioms to live by. But what Pichai doesn't expound upon is whether any form of enforcement mechanism will exist, or what sorts of penalties the company will incur should it violate these guidelines. Because no amount of mea culpas, apology tours or senatorial oversight committee appearances is going to suffice when a sentient AI slips its bounds and crushes humanity under the boot of a robot uprising.