Google Glass can hold its own when it comes to voice recognition and touch, but its current software doesn't account for gesture controls. OnTheGo Platforms, however, is looking to fix that. The folks at the Portland, Ore.-based company are cooking up an SDK that lets developers integrate gesture recognition into apps made for Glass and other Android-based smart glasses, such as the Vuzix M100. We went hands-on with a demo photo-snapping and gallery app to put the software through its paces.
[Image: OnTheGo Platforms' Google Glass gesture recognition demo at CES]
In its current form, the solution recognizes swipes from the left and right, a closed fist and an open hand. A fist aimed at Glass' camera fires off a countdown for a snapshot or takes you to the app's home screen, depending on where you are in the app. Waving a hand in either direction cycles through pictures in the gallery. This editor was tempted to swipe his hand quickly across the camera's view, but the software is tuned to pick up slower, more deliberate motions from about a foot away. Detection was often hit or miss, but the developers say they're still refining the recognition and have recently eliminated many false positives.
The sample application displayed lower-resolution video than we've come to expect from Google's wearable, but that won't be the norm with the team's development kit: what we glimpsed was just the footage the software analyzes. In fact, a live video feed doesn't even have to be displayed on Glass' prism for apps to take advantage of the code. The SDK still needs refining to live up to its full potential, but you can help the devs polish it by contacting them for access to a limited alpha. If you'd rather wait for a beta release, expect to get your hands on it in roughly three months.