Five questions for the man who created a robot documentarian


We've spilled buckets of digital ink on headless horse bots, uncanny humanoids and the coming of the robot apocalypse, but there's a softer, more emotional side to these machines. Social robots, as they're referred to, are less mechanized overlords and more emotional-support automatons, providing companionship as well as utility. Robots like these are forcing us to consider how we interact with the technology that we've created.

Under the direction of artist/roboticist Alexander Reben and filmmaker Brent Hoff, a fleet of precious, cardboard BlabDroids set out to explore the shifting boundaries of human-robot interaction. These tiny, wheeled machines aren't automated playthings, but serious documentarians seeking an answer to a deceptively simple question: "Can you have a meaningful interaction with a machine?" We'll dive deeper into the topic at Expand this weekend, but in the meantime, here's a short Q&A with Reben on an incredibly complex topic.


Onlookers often assigned human attributes to Reben's Two Mylar Balloons installation

Why are humans so fascinated with robots?
I think the fascination comes from the way we see ourselves in machines. Even if a robot does not look like us, we still project our emotions and agency onto it. This is nothing new: consider loving a teddy bear or yelling at a car that has broken down. When things behave in a manner we don't understand, we tend to assign that behavior a meaning in order to make sense of it. This tendency leads us to assign complicated meanings to even the simplest of machines. Robots seem magical because they appear to exhibit attributes we tend to associate only with living things.

Can they fulfill human needs that other humans can't?

Robots can certainly fulfill physical needs, as they can simply do tasks that people can't. Things that are dangerous, dirty or dull have traditionally been mechanized. With the rise of social machines, there are some tasks robots may do better, but not necessarily anything a person can't do. For instance, a recent study showed people feel less embarrassed describing their medical problems to a robot or avatar than to another person, and thus describe them in more detail. This may lead to more conditions being caught sooner, and more lives saved.

What are the limits (if any) to human-robot interaction?
Right now, I believe the limits of social robots lie in software intelligence. Currently, a difficult problem is emotional recognition. It is a non-trivial problem for a robot to interpret the emotional state of a person. Even for people, let alone robots, things like sarcasm are sometimes hard to detect. I guess a more philosophical question, with regard to limits, is whether a robot could ever really love.

How do we decide what is uniquely human? What is uniquely robotic?
I tend not to think of technology in general as something outside of humanity. We create most technology to enrich our lives in some way. Technology done right, and by extension robotics, has the quality of allowing us to be more human. Right now, robots and their functionality are a mirror of humanity.

What are the biggest ethical issues facing roboticists today?
For social robots, I think the biggest issue is not being able to foresee every individual contingency. Going back to the limits of emotional recognition, a robot may say or do something psychologically damaging to a person, even without any malicious intent. The more obvious issues involve robots that literally take people's lives into their "hands." Examples include self-targeting, self-shooting drones and self-driving cars. The drones are obvious, but in the case of cars, there could be situations in which the car may decide to kill you by swerving off a bridge rather than striking pedestrians running across the road. This is not a new problem, however, as philosophers have been pondering the ethics of the "trolley problem" for quite some time.