Nearly every step wrought havoc upon the prototype walker's frame. Designed to activate landmines in the most direct means possible, the EOD robot was nevertheless persistent enough to pick itself back up after each explosion and hobble forth in search of more damage. It continued on until it could barely crawl, its broken metal belly scraping across the scorched earth as it dragged itself by a single remaining limb. The scene proved to be too much for those in attendance. The colonel in charge of the demonstration quickly put an end to the macabre display, reportedly unable to stand the scene before him. The test, he charged, was inhumane, according to the Washington Post.
But how can this be? This was a machine, a mechanical device explicitly built to be blown up in a human's stead. We don't mourn the loss of toasters or coffeemakers beyond the inconvenience of their absence, so why should a gangly robotic hexapod generate any more consternation than a freshly squashed bug? It comes down, in part, to the mind's habit of anthropomorphizing inanimate objects. And it's this mental quirk that could be exactly what humanity needs to climb out of the uncanny valley and begin making emotional connections with the robots around us.
These sorts of emotional connections come more easily in military applications, where soldiers' lives depend on these devices working as they should. "They would say they were angry when a robot became disabled because it is an important tool, but then they would add 'poor little guy,' or they'd say they had a funeral for it," Dr. Julie Carpenter of the University of Washington wrote in 2013. "These robots are critical tools they maintain, rely on, and use daily. They are also tools that happen to move around and act as a stand-in for a team member, keeping Explosive Ordnance Disposal personnel at a safer distance from harm."
A US Army specialist sends an EOD robot towards an IED (Afghanistan, 2010) - Image: Reuters
"They were very clear it was a tool, but at the same time, patterns in their responses indicated they sometimes interacted with the robots in ways similar to a human or pet," Carpenter continues. These behaviors included naming the robots. And while the 22 soldiers that Carpenter interviewed for her study asserted that the destruction of these machines did not influence their decision-making, they did reportedly experience a range of emotion from anger and frustration to outright sadness. These military machines have very real value as their continued operation saves lives. But what about robots like the Anki Cozmo or the Sony AIBO, gadgets that serve the sole purpose of being sociable?
Dr. Kate Darling, a research specialist at the MIT Media Lab, defined a social robot as a "physically embodied, autonomous agent that communicates and interacts with humans on a social level." These robots "communicate through social cues, display adaptive learning behavior, and mimic various emotional states," which helps them elicit far stronger emotional bonds from their users than non-social devices do.
The reason behind this, Darling argued, comes down to three factors: physicality, perceived autonomous movement and social behavior. Humans tend to gravitate toward physical objects over visual representations like drawings or digital renderings. If that physical object is capable of moving on its own in a way that humans can't fully anticipate, we're more likely to interpret those motions as "intent" -- even if it's just your Roomba banging against walls or getting stuck under the couch again. However, if the physical, self-propelled device is designed to trigger specific social cues, such as Buddy's large and expressive eyes, the effect on the user is even stronger because it mimics "cues that we automatically, even subconsciously associate with certain states of mind or feelings," Darling wrote.
"[What] we're seeing is that people treat Cozmo more like a pet, not in all aspects yet but in some very fundamental ways," explained Hanns Tappeiner, president and co-founder of Anki. "We definitely knew people were playing with Cozmo one-on-one but what we learned [since the robot's launch last October] is they also actually play with it around the dinner table almost like what you would do with a puppy."
These anthropomorphic tendencies enable social robots to manipulate their users to a certain degree. But rather than demand to be "fed" and "played with" like Tamagotchis, the popular '90s digital pets, once did, social robots today are proving to be effective surrogates in both education and health care. Zora robots, built on SoftBank's NAO robotic platform, help motivate senior citizens to complete their therapeutic exercises, while the seal-shaped Paro robot serves as a stand-in for living pets for dementia patients.
"Some people are nervous about the fact that we're giving robots to old people because they think that we're replacing human care with technology," Darling told Engadget. "I'm not concerned in this case because I think that here, clearly, the robot is an animal therapy replacement and it works really, really well. It gives people that sense of nurturing something that they don't normally get to have because their life has been reduced to being cared for by others."
But don't expect robotics manufacturers to build human stand-ins any time soon. "It's too difficult to create a perfect human replica that behaves enough like a human that it doesn't disappoint your expectations when you interact with it," Darling argued. Instead, "we're going to see a lot of robots that draw more on animation techniques to mimic characters that we see as lifelike but that aren't trying to imitate something intimately familiar."
These emotional connections can become so strong that even simulated violence against their robotic companions triggers an overwhelming defensive response. Darling dubbed this sort of unidirectional attachment "the caregiver effect." Essentially, the robot's lifelike movements and visual cues cause people to emotionally project onto them, creating a sense of responsibility to provide the care and support that the robot appears to "need."
Although this effect can help people remain socially and emotionally engaged when they wouldn't otherwise be, it can have deleterious consequences as well. As Darling pointed out, this emotional connection could be leveraged by companies to extract money from their customers. Want the robotic pet that you've spent the last four years bonding with to keep working? Then you'd better pony up an extra $700 for this critical OS update.
"I'm a little bit worried about the seductive power of social robots in manipulating people" - Dr. Kate Darling, MIT Media Lab
Social robots, especially those that require an internet connection to the company's servers to drive their AI algorithms, may also pose serious privacy risks moving forward. We've already seen such problems arise within the current generation of social robots.
However, Darling believes that the issue can be addressed through policy regulation. "We need more general consumer protection laws," she said. "We all have an incentive to want the Roomba to collect certain data because it makes the Roomba more useful. But this data being collected and ultimately sold is the tradeoff for that. I think we do need some protection because people are going to continue to opt in to technologies that provide such great services at the cost of their privacy."
Despite these potential pitfalls, the trend toward social robotics shows little sign of slowing as robots (social or otherwise) become more ubiquitous in our homes and workplaces. "Given the oftentimes positive effect of adding social aspects to robots in getting people to like them or want to engage with them," Darling said, "we're going to be adding social technology to a lot of robots that are in shared places, like the guard robot that fell into the fountain."
Tappeiner agreed. "We really strongly believe that personality and character will be found in [domestic robots] and is actually going to completely redefine how we interact with technology." For example, Tappeiner pointed out that you can yell at and abuse digital assistants like Siri or Cortana as much as you like without any sort of repercussion, social or otherwise. "But if you are mean to Cozmo, he's going to get upset," Tappeiner said.
That sort of feedback, an AI that stands up for itself, will be essential to teaching humans to live harmoniously with their mechanical companions. "That's going to be very important for people -- not just for kids but for people overall -- to figure out how to deal with technology."