It seems so obvious: using Kinect to help people learn American Sign Language. That's exactly what researchers at Georgia Tech's College of Computing are working on, pairing Microsoft's oft-hacked motion-sensing camera with custom software that previously required colored gloves fitted with wrist-mounted, 3-axis accelerometers. On a series of increasingly difficult tests, the software returned results with 100%, 99.98%, and 98.8% accuracy.

These promising results mean the team will be working on updates, including a larger vocabulary that will require "hand shape features." The initial proof-of-concept demo launched with a small vocabulary that excluded them in favor of broader gestural movements of the arms and body. We imagine the reported fourfold increase in Kinect image resolution would be a major benefit here, should Microsoft ever release it.

This article was originally published on Joystiq.