It all started with a simple, if petulant, demand I'd made during a planning meeting for November's Expand 2014 in New York City, our third tech-meets-entertainment spectacle. "I want dancing robots!" I said, doing my best Veruca Salt. Months later, I was sitting front and center in a sprawling convention center watching three humanoid robots do a choreographed dance routine for a crowd of slack-jawed spectators.
The performers, Aldebaran's toddler-sized Nao bots, showed off their 25 degrees of freedom in an unfortunately out-of-sync dance routine, while real-life toddlers crowded the edges of the stage slapping their little hands to the beat. The children, as one might imagine, weren't particularly concerned with the lack of coordination. And it seemed neither were the adults, as a sizable crowd had gathered to revel in the awesomeness that is a robot dance party. Meanwhile, I sat mere yards away feeling equal parts awe and embarrassment for the troupe. I was feeling for these machines. I was falling in love.
It was only natural, then, that these dancing robots would combine with my first love, cocktails, when we started planning for our January CES 2015 stage show, Engadget After Hours. The plan was simple: A local bartender would join us on stage each night to supply us with ample amounts of booze while we ran down the weird, wild and sometimes ridiculous products at the biggest tech show on Earth. Nao would also be on hand to lend a little robotic flair.
Aside from a day-one technical hiccup, Nao delivered on its promises: It introduced me and my co-host, Engadget Editor-in-Chief Michael Gorman; it read off the ingredients of our nightly cocktail; and it even threw a little impromptu shade our way. It hit its cues, clapped when prompted and kept quiet when it was time for the flesh-and-blood stars to shine.
For all intents and purposes, Nao was the perfect live TV companion. But I couldn't get over one thing: Just a couple of yards away, staring me straight in the eyes, was the human puppet master pulling Nao's digital strings, a developer from Aldebaran with a clunky, black laptop. Nao had another man in its life, one that literally controlled everything it did. As the reality that Nao wasn't operating independently set in, my feeling of affection began to fade. I no longer saw Nao as the multi-talented, technological wonder that had elicited warm emotions from me. Instead, I saw a cold, hard machine; something akin to the giant animatronic members of Chuck E. Cheese's jug band. I knew Nao was capable of more sophisticated forms of human-robot interaction, but I felt cheated.
Why was I having flashbacks to Ashlee Simpson's failed backing track during a very live SNL performance? Why did I feel like someone had just pulled the curtain back on the Wizard of Oz?
When we invited Nao to co-host Engadget After Hours during CES, I'd set aside rationality, subconsciously expecting to share the stage with a robot capable of human-level interaction. To better understand what I was feeling, I reached out to artist and roboticist Alexander Reben. As he pointed out, I was seeking the Holy Grail of artificial intelligence. I wanted Nao to pass the Turing test.
The Turing test derives its name from Alan Turing, often referred to as the father of modern computing and the subject of the Oscar-winning film, The Imitation Game. In a 1950 paper, Turing proposed a test by the same name in which a human judge is tasked with distinguishing between a computer and a human in an effort to measure the computer's ability to think like us.
As it turns out, Nao's inability to keep pace in repartee says more about the state of artificial intelligence than the limitations of a single machine. Google, Apple, Microsoft and Amazon have all very visibly thrown their hats into the AI ring, with Google Now, Siri, Cortana and Echo, respectively. Meanwhile, engineers and researchers the world over continue to push for more sophisticated machines. And, yet, no one has invented a single device or piece of software that can pass the Turing test.
But it didn't take human-level intelligence to get me emotionally invested in Nao. In fact, my disparate experiences can be summed up with the tired "man behind the curtain" metaphor. I'd been a mere spectator of a dance performance months before, and aside from seeing a human setting the robots up on stage, I'd had no concept of what happened behind the scenes. But at CES, I was made keenly aware of how the robotic sausage was made. In that instance, I saw the man pulling the strings and feeding my co-host lines of code to make sure it stayed on cue.
"Once you know that, the magic's been lost," Reben said. "In itself, it's no longer a thing that could possibly be alive. You can't suspend disbelief anymore, because it's been ruined."
It's all a matter of perception. Which explains why my experience differed so greatly from that of the audience and my colleagues. At CES, I saw the man behind the curtain; the crowd saw a sassy robot engaged in banter with a pair of humans. Despite its limitations and my overblown expectations, however, Nao is capable of far more sophisticated forms of communication than canned sound bites.
Heather Knight, a seasoned roboticist and Ph.D. student at Carnegie Mellon's Robotics Institute, runs an art-meets-technology outfit called MarilynMonrobot, which focuses on "socially intelligent robot performances and sensor-based electronic art." She also worked at Aldebaran, Nao's birthplace, on sensor design. In 2010, Knight introduced Data, a Nao bot rebranded as a stand-up comedian, during a TEDWomen talk. Using software designed by Knight and fellow CMU alumni, in addition to its built-in microphone and camera, Data was a much more complex artificial being than the Nao that I knew. It took visual and auditory cues from its audience and tailored its act to fit the room.
It wasn't a flawless performance. Data talked through applause; it bombed on a handful of jokes; and the ones that hit were canned, comedy club clichés. But, while watching a taping of the performance, I started to feel the same sense of sympathy I'd felt months earlier when the Nao dance troupe fell out of step at Expand. This was a complex machine, performing complex tasks, but more than that, it was an imperfect humanoid performer. With Data acting largely independent of its human keeper, the feeling that it was merely a modern-day equivalent of Teddy Ruxpin was starting to dissipate. I was ready to give robot love another try.
A month after our two-night rendezvous in Vegas, I was again sitting face to face with the robot that had at once piqued my curiosity and destroyed my fantasies. It stood on a conference table in Engadget's San Francisco offices next to two representatives from Aldebaran. They were there to discuss the possibility of making Engadget After Hours a weekly show. As we talked through the best ways to incorporate Nao into our act, the limitations of modern AI and Nao's complexities simultaneously came into focus.
While Aldebaran intends to make Nao your future robot companion, it's still the plaything of developers and institutions. Getting Nao to perform specific tasks, especially on-demand for a live studio audience, requires a bit of digital hand-holding. In the case of our late night show, we were again faced with a Wizard of Oz scenario, a possibility that, after weeks of soul searching, no longer disappointed me. I knew what Nao was capable of.
As the meeting came to a close, one of Aldebaran's reps asked if we cared for a performance. He turned Nao to face him, and repeated the words "Eagle Dance" until it recognized the command. As it lifted its leg and spread its arms wide like a ballerina donning a coat of armor, I was reminded of why I'd fallen in love months before.