Google and Alphabet CEO Sundar Pichai takes his sweet time getting to the point in a new Financial Times editorial. But when he gets there, he leaves little room for interpretation: "...there is no question in my mind that artificial intelligence needs to be regulated. It is too important not to."
After laying out his relationship with technology and offering a few examples where innovation has had unintended negative consequences, Pichai makes the case that while AI is powerful and useful, we must balance its "potential harms... with social opportunities." Of course, this call for "balance" leaves open the question of just how strict the regulation Pichai has in mind would be. He doesn't specifically rebuff the White House's recent calls for a light touch. Nor does he suggest the EU's more comprehensive proposals go too far.
Instead, he makes clear that having the international community come to an agreement on regulatory issues is key. Then he seems to suggest that Alphabet's own internal handling of AI could serve as a guideline. He claims that the rules and systems put in place by the company help it avoid bias and prioritize people's safety and privacy -- though it is debatable how successful Alphabet has been on those fronts. He also says the company will not deploy AI "to support mass surveillance or violate human rights." And while Google does not sell facial recognition software that could easily be abused (unlike some of its competitors), there is serious concern that Google and its ilk pose a broad threat to human rights.
One point Pichai makes that is undeniable, though, is that "principles that remain on paper are meaningless." There's little question at this point that AI needs to be regulated. But just as important as codifying those rules is having a regulatory body with the authority and power to enforce them. If there are no significant consequences for companies that flout the rules or bad actors who abuse these tools, then the rules make little difference.