MIT gestural computing makes multitouch look old hat
The MIT Media Lab has already given us Big Bird's illegitimate progeny and augmented reality projects aplenty; now it's showing off three-dimensional gestural computing. The new bi-directional display being demoed by the Cambridge-based boffins performs both the multitouch functions we're familiar with and hand movement recognition in the space in front of the screen -- which we're also familiar with, but mostly from the movies. The gestural motion tracking is done via optical sensors embedded behind the display, which are able to see what you're doing because the LCD rapidly alternates (invisibly to the human eye, though probably not to human pedantry) between the image it shows the viewer and a sensing pattern for the camera array. This differs from projects like Natal, which have the camera offset from the display and therefore can't work at short distances, but if you want even more detail, you'll find it in the informative video after the break.
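To make the trick concrete, here's a minimal sketch of the time-multiplexing idea described above: the LCD alternates between normal content for the viewer and a sensing pattern for the sensor array behind it, quickly enough that the eye only perceives the displayed image. All names, timings, and the function structure are illustrative assumptions, not MIT's actual implementation.

```python
# Hypothetical sketch of a time-multiplexed display/capture loop.
# The refresh rate is an assumed figure for illustration only.
DISPLAY_HZ = 120
FRAME_SECONDS = 1.0 / DISPLAY_HZ

def run_frames(n_frames, show_image, show_pattern, capture):
    """Alternate viewer-facing frames with sensing frames.

    Even frames show normal content; odd frames display a sensing
    pattern (e.g. a mask the rear sensors can image through) and
    trigger a capture from the embedded optical sensor array.
    """
    for i in range(n_frames):
        if i % 2 == 0:
            show_image()      # normal content for the viewer
        else:
            show_pattern()    # sensing pattern on the LCD
            capture()         # rear sensors grab a frame

# Example: trace the schedule for four frames.
trace = []
run_frames(
    4,
    show_image=lambda: trace.append("image"),
    show_pattern=lambda: trace.append("pattern"),
    capture=lambda: trace.append("capture"),
)
# trace now alternates image / pattern+capture pairs
```

At a real 120 Hz-class refresh rate, each half of the cycle lasts only a few milliseconds, which is why the viewer never notices the sensing frames.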