
This year we took small, important steps toward the Singularity

But can we learn to get along with our robotic pals of tomorrow?


We won't have to wait until 2019 for our Blade Runner future, mostly because artificially intelligent robots already walk, roll and occasionally backflip among us. They're on our streets and in our stores. Some have wagged their way into our hearts while others have taken a more literal route. Both in civilian life and the military battlespace, AI is adopting physical form to multiply the capabilities of the humans it serves. As robots gain ubiquity, friction between these bolt buckets and us meat sacks is sure to cause issues. So how do we ensure that the increasingly intelligent machines we design share our ethical values while minimizing human-robot conflict? Sit down, Mr. Asimov.

In the last year, we've seen Google's DeepMind launch its Ethics & Society research unit to investigate the implications of its AI in society, and we've witnessed the rise of intelligent sex dolls. We've had to take a deep look at whether the warbots we're developing will actually comply with our commands and whether tomorrow's robo-surgeons will honor the Hippocratic Oath. That's not to say such restrictions can't be hard-coded into an AI's operating system, just that additional nuance is needed, especially as 2018 will see AI reach deeper into our everyday lives.

Asimov's famous Three Laws of Robotics are "a wonderful literary vehicle but not a pragmatic way to design robotic systems," said Dr. Ron Arkin, Regents' Professor and director of the Mobile Robot Laboratory at the Georgia Institute of Technology. Envisioned in 1942, when the state of robotics was rudimentary at best, the laws proved too rigid for use in 2017.

During his work with the Army Research Office, Arkin's team strived to develop an ethical robot architecture -- a software system that guided robots' behavior on the battlefield. "In this case, we looked at how a robotic software system can remain within the prescribed limits extracted from international humanitarian law," Arkin said.

"We do this in very narrow confines," Arkin continued. "We make no claims these kinds of systems are substitutes for human moral reasoning in a broader sense, but rather we can give the same guidelines -- in a different format, obviously -- that you would give for a human warfighter when instructed how to engage with the enemy, to a robotic system."

Specifically, the context of these instructions is dictated by us. "A human being is given the constraints, and restraints, if you will, for the robotic system to adhere to," he said. It's not simply a matter of what to shoot at, Arkin explained, but whether to shoot at all. "There are certain prohibitions that must be satisfied," Arkin said, so that "if it finds itself near cultural property which should not be destroyed, or if that individual or target is near civilian property like a mosque or a school, it should not initiate in those circumstances."

This "boundary morality," as Arkin puts it, likely won't be enough for robots and drones to replace human warfighters, and certainly not next year. But in certain scenarios, such as clearing buildings or counter-sniper operations, where collateral damage is common, "put a robot in that situation and give it suitable guidance to perhaps do better, ultimately, than a given warfighter could," Arkin concluded.

In these narrowly defined operations, it is possible to build a three-laws-like sense of ethics into an AI operating system. "The constraints are hard-coded," Arkin explained, "just like the Geneva Conventions say what is acceptable and what is not acceptable."
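
To get a feel for what hard-coded constraints might look like in practice, here's a minimal, purely hypothetical Python sketch. The prohibitions, field names and values are invented for illustration and aren't drawn from Arkin's work or any real weapons system.

```python
# A purely hypothetical sketch of a hard-coded "boundary morality" check.
# The prohibitions and data fields below are invented for illustration.
from dataclasses import dataclass

PROTECTED_SITES = {"school", "mosque", "hospital", "cultural_property"}

@dataclass
class Target:
    hostile_confirmed: bool
    nearby_sites: set              # e.g. {"school"}
    expected_civilian_harm: int    # predicted noncombatant casualties

def engagement_permitted(target: Target) -> bool:
    """Return True only when every hard-coded prohibition is satisfied."""
    if not target.hostile_confirmed:
        return False               # never engage an unconfirmed target
    if target.nearby_sites & PROTECTED_SITES:
        return False               # a protected site is in range: hold fire
    if target.expected_civilian_harm > 0:
        return False               # any predicted civilian harm: hold fire
    return True

# A confirmed target next to a school is always refused.
print(engagement_permitted(Target(True, {"school"}, 0)))   # False
```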

Machine-learning techniques may empower future AI systems to play an expanded role on the battlefield, though they are not without risks of their own. "There are some cases of machine learning which I believe should not be used in the battlefield," Arkin said. "One is the in-the-field target designation where the system figures out who and what it should engage with under different circumstances." That level of independence is not one we are currently ethically or technologically equipped to handle; such decisions should instead be vetted first by a human in the loop, "even at the potential expense of the mission. The rules of engagement don't change during the action."
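
A human-in-the-loop requirement of that sort could look something like the following hypothetical gate, where the machine may only recommend, a person must authorize, and the rules of engagement are fixed before the mission starts. The names and structure here are ours, not Arkin's.

```python
# Hypothetical human-in-the-loop gate: the system recommends, a human decides.
# Both callables are placeholders supplied before the mission; neither the
# rules nor the approval path change once the action is underway.
from typing import Callable

def request_engagement(target,
                       rules_of_engagement: Callable[[object], bool],
                       human_authorizes: Callable[[object], bool]) -> bool:
    if not rules_of_engagement(target):   # hard-coded constraints always apply
        return False
    # A human decision is required, even at the potential expense of the mission.
    return human_authorizes(target)

# Example: an operator who declines means no engagement, full stop.
print(request_engagement("target-1", lambda t: True, lambda t: False))   # False
```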

"I believe that if we are going to be foolish enough to continue killing each other in warfare that we must find ways to better protect noncombatants. And I believe that this is one possible way to do that," Arkin concluded.

2017 saw a rise in interactions between robots and humans in the supermarket -- looking at you, Amazon Go -- but in the coming year, care must still be taken to avoid potential conflict. "These robots, as they actuate in the physical space, they'll encounter more human bodies," said Manuela Veloso, professor at Carnegie Mellon's School of Computer Science and head of CMU SCS's Machine Learning Department. "It's similar to autonomous cars and how they'll interact with people: robots will eventually need to make ethical decisions." We're already seeing robots encroach on production lines and fulfillment centers. This sense of caution will be especially necessary when it comes to deciding who to run over.

And, unlike military applications, civil society has many more subtle nuances guiding social mores, making machine-learning techniques a more realistic option. "Machine learning has a much higher probability of handling the complexity of the spectrum of things that may be encountered," Veloso said, though "it probably will be a complement of both."

In this way, fundamental social rules -- such as no biting, no shouting, et cetera -- can be hard-coded into the AI while machine learning can help guide the AI through its day-to-day tasks. "Machine learning has a very beautiful kind of promise -- in some sense humans, they are not as good in terms of explaining everything they care about in terms of actually rules and statements," Veloso added. "But they do reveal themselves how they act by example."
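
As a rough illustration of the hybrid Veloso describes, the sketch below pairs a handful of hard-coded social rules with a stand-in "learned" policy. The rules, actions and policy are all made up for the example; no real robot works exactly this way.

```python
# Hypothetical hybrid: a learned policy proposes actions, hard-coded social
# rules veto the unacceptable ones. Everything here is a toy placeholder.
HARD_RULES = [
    lambda action: action != "bite",
    lambda action: action != "shout",
    lambda action: action != "block_doorway",
]

def choose_action(learned_policy, observation):
    """Take the policy's suggestions in order, keeping the first rule-compliant one."""
    for action in learned_policy(observation):            # best suggestion first
        if all(rule(action) for rule in HARD_RULES):
            return action
    return "wait"                                          # safe fallback

# A toy "policy" standing in for behavior learned from human examples.
toy_policy = lambda obs: ["shout", "say_excuse_me", "wait"]
print(choose_action(toy_policy, observation=None))         # "say_excuse_me"
```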

Like Arkin, Veloso doesn't exactly think we'll be handing robots the keys to the kingdom next year. "AI systems for a long time should be assistants, they should be recommenders," she said. We've already witnessed that trend in 2017, with digital assistants moving from our phones into our homes, and it will very likely continue into the new year. But a long time doesn't mean never. "These AI systems have the potential to be great other 'people'," Veloso continued. "Great other minds, data processors and advisors." Just maybe don't give them guns yet.

Dr. David Hanson of Hanson Robotics presents the Sophia Robot

Humans will have responsibilities toward their mechanical counterparts as well, specifically treating them with respect. Now, whether robots -- especially anthropomorphic ones like Hanson Robotics' Sophia, which debuted this year -- "deserve" respect any more than your Keurig or Echo does is a slippery ethical slope that only Chidi would relish sliding down. But social standards of acceptable behavior are constantly in flux, and this is something that needs codifying in 2018.

"We feel responsible to not hurt dogs and cats," Veloso explained. "I don't think that [robots] will have 'feelings' like a dog or a cat does. I think that it's probably that people have to get used to appreciate the function, like you're not going to kick your refrigerator or disconnect your toaster" when they don't function properly.

"I believe that if we don't make these robots look a lot like people -- with skin and everything -- people will always treat them as machines," Veloso concluded. "Which they are."

Our relationship with technology, especially AI systems that approach (and will eventually exceed) human intelligence, is changing whether we like it or not. We've already seen Google's AlphaGo AI beat the pants off human masters repeatedly this year. Still, we're not likely to see America's military rolling out autonomous smart tanks and Terminator-style battle robots within the next two decades, let alone the next 12 months, Arkin estimates.

The US Army is already reaching out to industry for help in designing and deploying machine-learning and AI systems to counter foreign cyberattacks, the first results of which will begin rolling out next year. Beyond that, we're likely to see a slate of smart technologies in the near term, from self-guided helicopters to in-the-field part printing (assuming Elon Musk doesn't get his way). The state of the art for battlefield AI is simply too far in its infancy to reliably deploy anything more ambitious. Instead, that change will likely be driven by civilian society.

"I think humans are amazing in the sense of being extremely open minded with respect to technology," Veloso said. "Look at the world in which we live versus the world in which our grandparents lived. The amount of technology we are surrounded by is absolutely fascinating. We aren't taught in school anything different from what our grandparents were taught in school: it's history and geometry and algebra and we still manage to live with so much more technology because humans are so smart."

Hopefully we'll prove smart enough to treat tomorrow's robots better than we treat each other today.


Image: VCG/VCG via Getty Images (Sophia robot)