People have been trying to sell us 3D this and 3D that for ages, but for the most part it's always been the same flat surface we're looking at and poking with our fingers. Some restless souls in Japan, however -- including Engadget's very own Kentaro Fukuchi -- have begun developing a way for computers to recognize a person's interactions with real objects and respond accordingly. The essence of the new technique is to use translucent rubbery objects whose diffraction of specially polarized light is picked up by a camera, so relatively subtle actions like squeezing and stretching can be detected from the distinct light patterns they produce. The system is still in the early stages of development, but its creators hope it will eventually assist in surgery training -- though we've got video of its more fun potential uses after the break.
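For the curious, the general sensing principle (stress in a transparent elastic object changing how polarized light passes through it, which a camera sees as a local brightness change) can be sketched in a few lines. To be clear, everything below (the frame values, the threshold, the tiny 4x4 "image") is invented purely for illustration and is not the researchers' actual pipeline:

```python
import numpy as np

# Toy illustration only: under crossed polarizers, stressed regions of a
# transparent elastic object transmit more light, so a squeeze appears as
# a localized brightness increase relative to a baseline camera frame.
# The threshold and synthetic frames here are made up for the demo.

def detect_squeeze(baseline, frame, threshold=40):
    """Return (row, col) pixels whose brightness rose past `threshold`."""
    diff = frame.astype(int) - baseline.astype(int)
    return [(int(r), int(c)) for r, c in np.argwhere(diff > threshold)]

# Unstressed object: uniformly dark image under crossed polarizers.
baseline = np.full((4, 4), 10, dtype=np.uint8)

# Squeezing one corner brightens that region (stress-induced birefringence).
frame = baseline.copy()
frame[0, 0] = 200

print(detect_squeeze(baseline, frame))  # -> [(0, 0)]
```

A real system would of course work on full-resolution video and track how the bright regions deform over time, but the core trick is just this comparison against an unstressed reference image.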
[via New Scientist]