Getting Human-Computer Interaction Down to the Metal

While the concept of virtual reality has been around for decades — and the scattered remains of its previous failures litter the garages of collectors and burned enthusiasts — it has only recently become a legitimate way forward in the tech world.

With largely positive responses to the Oculus Rift and Gear VR, VR finally looks like it's here to stay. Investors and analysts are also bullish: In the past five years, VR companies have raised $746 million in VC funding, with investment jumping by 50 percent in the first half of 2016. Some analysts even anticipate that VR will overtake TV within 10 years.

Despite VR's advancements, the interaction between man and machine still isn't seamless: users are stuck with cumbersome arm movements and head nods, and with interfaces built around two-dimensional landscapes. To truly allow VR technology to thrive, VR manufacturers need to get closer to the metal.

Where Current Human-Computer Interaction Falls Short

If you're a coder — or you've worked with coders long enough — you've probably heard the phrase "down to the metal." It refers to code that writes directly to the hardware, cutting out the middlemen of the operating system and device drivers. Coding down to the metal increases performance because nothing gets in the way.
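To make the idea concrete, here's a minimal sketch, assuming a hypothetical bare-metal microcontroller whose LED is wired to a memory-mapped register; the address and bit position below are made up for illustration, standing in for whatever a real board's datasheet would specify. Instead of asking the operating system and a driver to switch the light on, the code writes the bit straight into the hardware register.

    /* A "down to the metal" sketch: the register address and bit are
       hypothetical, not from any particular chip's datasheet. */
    #include <stdint.h>

    #define LED_REGISTER ((volatile uint32_t *)0x40020014u)  /* made-up memory-mapped I/O address */
    #define LED_ON_BIT   (1u << 5)                           /* made-up bit that drives the LED */

    void led_on(void)
    {
        /* No system call, no driver: write the bit directly into the
           hardware register and the LED lights up. */
        *LED_REGISTER |= LED_ON_BIT;
    }

Going through an operating system instead would add a system call, a driver, and context switches before that same bit flips; cutting out those layers is what coding down to the metal buys you.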

In the world of human-computer interaction, the ideal interaction is one that's down to the metal of the human brain — turning thought into action with as few intermediaries as possible.

One of the reasons the iPhone's all-touchscreen interface was such a transformative success is that finger-to-glass is significantly closer to the metal than the traditional mouse and keyboard of a desktop or the rollerball of the BlackBerry. The glass slab presented a more natural way to interact with machines than most consumers had ever experienced in a personal device.

But while touch works well enough for phones in their current form factor, it's not without problems. Touch on traditional computers still runs into the "gorilla arm" problem, and the small screen real estate on today's wearables makes touch cumbersome. This, combined with the apparently fading popularity of tablets, suggests that the future of computing may not involve a touchscreen after all.

Voice has stepped in to tackle some of these issues, with Cortana and Siri finally making the leap to the desktop. The Amazon Echo and Google Home also use voice, acting as virtual assistants that don't require you to stop moving to set tasks, call someone, or check your schedule. But this method of interaction is still fairly limited, and there are legitimate security concerns in having an open microphone act as the primary interface to a personal device.

What this means for the world of HCI is that while the mouse, the touchscreen, and voice all have their strengths, none has quite managed to seamlessly translate the natural way we think and act.

Picking HCI Back Up

VR is already completely changing the way some industries work — not just in the world of gaming, but also in areas such as healthcare and education. But its advances in HCI are far larger and more all-encompassing than any changes it can make to individual industries.

Despite its outwardly cumbersome appearance, VR represents one of the most natural ways a person can interact with a computer. It presents the world in the shape that people see it, adding interactive depth to HCI in a way that goes beyond the gimmicks of 3D TV. Users will be able to move about interfaces and walk into virtual landscapes to explore. For people without depth perception, the addition of depth could mean seeing whole shapes they didn't know existed. Both augmented and mixed reality double down on this notion, placing the computer interface within the physical world.

Haptics, the ability to add tactile sensation to virtual reality, offers another piece of this puzzle. While many of the VR headsets sold today are tethered to traditional controllers or wands, haptic controls exist for virtually every body part, which, along with body tracking, could be enough to completely blur the line between VR and just plain "R." Imagine playing a VR game as a knight and feeling the weight of the sword in your hand, with haptic technology backing your VR system.

And last, eye-interaction technology can bridge the chasm between a virtual experience and the user. It frees users' hands for any additional HCI components, or simply to relax. The eye-brain connection and the speed of the eyes, which move between targets in tens of milliseconds, provide HCI that is as close to the metal of the human brain as we will get until direct brain-computer interfaces (BCIs) arrive. For now, unless you're willing to have electrodes surgically implanted in your brain, eye interaction is as close as it gets. It allows you to think and look, whereupon your intent is translated into action through a mix of purposeful and nonpurposeful eye movements.

Eye interaction also helps better translate a user's feelings and perspective, not just between a person and a computer, but between people as well. It brings a direct, emotional, tangible understanding: when you're looking at an avatar in a virtual world and the avatar looks back at you, it's truly the person on the other end doing the looking.

Whether VR takes off in the mainstream this year, next year, or in five years, the advances it's making in HCI are already being felt. From the Taptic Engine in the Apple Watch to the gesture-based controls of the Magic Leap, haptic feedback and body tracking are being implemented and experimented with outside the VR space. And with products like Google Cardboard continuing to develop, efforts to add depth to the mobile experience are already underway without requiring expensive hardware.

While many of these new methods of HCI are still in their early stages, they've made the direction we're moving in very clear: The gap between computers and humans is shrinking. Soon, our interactions with machines will no longer require a learning curve. Instead, they'll be naturally, unmistakably human.

Jim Marggraff is the founder and CEO of Eyefluence, a company that is engineering the ability to transform intent into action through your eyes.

Co-written by Robert Rohm, a Quality Assurance Engineer at Eyefluence.