Making sense of motion interfaces and gestures at Engadget Expand

We're still big fans of Douglas Engelbart's original pointing device, but human/computer input is moving past traditional peripherals. We're rapidly approaching a future of touchscreens, motion sensors and visual imaging control solutions.

"Gone are the days, probably, of the keyboard, mouse and maybe even touch input," Samsung's Shoneel Kolhatkar told us. During a panel on the future of gesture and motion controls at Expand NY, Kolhatkar suggested that these technologies could fade away within the next 20 years. His fellow panelists, Pelican Imaging's Paul Gallagher and Leap Motion's Avinash Dabir, agreed that there's more to the future of computing than the traditional point and click.

"I have a 4-year-old niece," mused Dabir, Leap Motion's director of developer relations. "She's so used to swiping things that she tries to swipe open our screen door." He's underlining Kolhatkar's point: The coming generation isn't weighed down by our attachment to physical PC control peripherals; they're growing up with touch-, voice- and gesture-controlled interfaces, and it comes naturally. As for the rest of us? "We'll get used to it," he says. "We'll learn."

In a way, the panel explained, it makes sense -- motion already plays a part in human communication. A child's swipe at a door and a person's conversational gesturing are both examples of how a motion-controlled future can tie naturally into how we already interact with the world. For a future of motion control to work, interaction needs to be predictive and natural. "The gestures need to be simple, efficient and immersive," explained Kolhatkar. "We're used to going through navigation and menu systems, but with these alternate gestures, we're finding that boundaries are getting really thin." Features like Samsung's Smart Scroll and Smart Pause exemplify this: a device that simply stops playing video when it knows you aren't looking at the screen.

"What we're talking about is how to have a system interact with you and be able to understand your intent," Gallagher explained. Devices need to get smarter and recognize when you're talking to them, just as a person knows when they're being addressed through body language or eye contact. Leap Motion's Dabir describes it as having the software meet you halfway. "If I want to point to something and close it, why can't it come to me and anticipate my intentions?"

Even so, these dreams of highly intelligent and predictive gesture-control interfaces are pretty far out. "Everybody's seen Minority Report," Gallagher said, recalling Tom Cruise's floating computer interface and his sweeping gestures, "but try doing that for an hour!" For now, the industry is focusing on specialized use cases: providing an interface for surgeons who need to access files in the middle of an operation, but don't want to break sterility, or for workers on oil rigs who can't risk dirtying electronics with the natural grime that comes with their jobs. Dabir hopes that the use of gesture controls will trickle down from these special cases, eventually finding a home in consumer devices.

"There are certainly use cases out there where it can be used," Dabir concluded. "I think it'll start in these specialized use cases and trickle down into everyday workflows," eventually finding a home in how consumers interact with computers on a daily basis. It's still early, the panel agreed, but the future of motion control is in the hands of today's developers. "They help define what these gestures are," Dabir clarified. "The best ones trickle to the top and help set the standards. We still need time to figure out what works and what doesn't."
