Fingertip sensor lets robots 'see' what they're touching

We've seen robotics improve by (literal) leaps and bounds recently, but what about more nuanced abilities, like a fine sense of touch? Researchers at MIT and Northeastern University are showing off a new fingertip version of the GelSight sensor, a cube-shaped attachment that uses a camera and a sensitive rubber film to build 3D maps of whatever the robot is grabbing. That new level of precision, the team says, could lead to more independent robots that are better able to manipulate their environment.

In the team's demo (shown in the video above), a Baxter robot from Rethink Robotics uses its standard sensors to grab a dangling USB cord. At that point, the GelSight sensor attached to the robot's two-pronged hand susses out the finer details, specifically the raised USB logo embossed on one side of the plug. The sensor's cube-shaped housing features a thin rubber film covering one side. That layer conforms to whatever is pressed against it, while multicolor LEDs bounce light off the resulting bumps and ridges. A camera watches how that light reflects and the system uses it to build a 3D depth map of the object. Using what it knows about USB connector design, the system can then position the plug accurately enough to insert it into an adapter plugged into a power strip below.
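The underlying trick is essentially photometric stereo: because each LED color lights the gel from a different, known direction, a single camera frame can be decomposed into per-pixel surface normals and then integrated into a height map. The article doesn't include code, but a minimal sketch of that idea (with made-up light directions, a Lambertian-surface assumption, and a crude integration step instead of the more careful methods a real system would use) might look like this:

```python
import numpy as np

# Hypothetical light directions for the red, green and blue LEDs (unit vectors).
# Real GelSight hardware gets these from calibration; these are placeholders.
LIGHT_DIRS = np.array([
    [ 0.8,  0.0, 0.6],   # red LED
    [-0.4,  0.7, 0.6],   # green LED
    [-0.4, -0.7, 0.6],   # blue LED
])

def normals_from_rgb(image):
    """Estimate per-pixel surface normals from one RGB image, assuming each
    color channel is lit from a single known direction (photometric stereo
    under a Lambertian reflectance model)."""
    h, w, _ = image.shape
    intensities = image.reshape(-1, 3).T                 # shape (3, h*w)
    # Solve LIGHT_DIRS @ n = intensity for the unnormalized normal per pixel.
    n, *_ = np.linalg.lstsq(LIGHT_DIRS, intensities, rcond=None)
    n = n.T.reshape(h, w, 3)
    norm = np.linalg.norm(n, axis=2, keepdims=True)
    return n / np.clip(norm, 1e-6, None)

def depth_from_normals(normals):
    """Integrate surface gradients (-nx/nz, -ny/nz) into a rough height map
    by cumulative summation; production systems use Poisson integration."""
    nz = np.clip(normals[..., 2], 1e-6, None)
    gx = -normals[..., 0] / nz
    gy = -normals[..., 1] / nz
    depth = np.cumsum(gy, axis=0) + np.cumsum(gx, axis=1)
    return depth - depth.min()

if __name__ == "__main__":
    frame = np.random.rand(480, 640, 3)   # stand-in for a camera frame of the gel
    depth_map = depth_from_normals(normals_from_rgb(frame))
    print(depth_map.shape, depth_map.max())
```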

While the fingertip sensor isn't quite as accurate as earlier, larger iterations of the GelSight tech, the team says it's still about 100 times more sensitive than a human finger.

[Image credit: Melanie Gonick/MIT]