Researchers developed an AI backpack system to guide vision-impaired wearers

The system uses a 4K spatial camera and Intel tech for image processing.

Jagadish K. Mahendran, Institute for Artificial Intelligence, University of Georgia

Researchers at the University of Georgia have developed a backpack system to help vision-impaired wearers understand and navigate their surroundings. The backpack uses a Luxonis OAK-D spatial camera, which has an on-chip edge AI processor and uses Intel's Movidius image processing tech.

The 4K camera, which captures depth information as well as color images, is packed inside a vest or fanny pack. The system uses Intel's OpenVINO toolkit for inference and can run for up to eight hours on a pocket-sized battery housed in the fanny pack. The backpack holds a lightweight computing device with a GPS unit.

The researchers say their system can detect obstacles (including overhead ones) and tell the wearer where those obstacles are through audio prompts. It can also read traffic signs and identify changes in elevation. It can, for instance, inform the wearer that there's a stop sign by a crosswalk or let them know when there's a curb in front of them.
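As a rough illustration of how detections plus depth readings might become audio prompts, here is a minimal sketch. The detection format, direction thresholds, and phrasing are all assumptions for illustration, not the researchers' actual implementation:

```python
# Hypothetical sketch: converting an object detection with depth into a
# spoken prompt, loosely modeled on the backpack's audio feedback.
# The detection dict format and thresholds are assumptions.

def direction(x_center, frame_width):
    """Map a bounding-box center to a coarse direction word."""
    third = frame_width / 3
    if x_center < third:
        return "left"
    if x_center < 2 * third:
        return "ahead"
    return "right"

def make_prompt(detection, frame_width=640):
    """Build an audio prompt string from one detection.

    detection: dict with 'label', 'x_center' (pixels), 'depth_m' (meters).
    """
    where = direction(detection["x_center"], frame_width)
    return f"{detection['label']} {where}, {detection['depth_m']:.1f} meters"

# Example: a stop sign detected right of center, 3.2 meters away.
print(make_prompt({"label": "stop sign", "x_center": 500, "depth_m": 3.2}))
# → stop sign right, 3.2 meters
```

In a real pipeline, the detections would come from the OAK-D's on-chip inference and the prompt string would be fed to a text-to-speech engine over the earpiece.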

A Bluetooth earpiece lets wearers control the system with their voice. They can ask it to describe their surroundings or save GPS locations under a specific name.
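The GPS-saving feature could be sketched as a simple named-location store driven by parsed voice commands. The command phrasing and storage format below are assumptions for illustration only:

```python
# Hypothetical sketch: saving and recalling named GPS locations from
# simple voice-style commands. Phrasing and data format are assumed,
# not taken from the researchers' system.

class LocationStore:
    def __init__(self):
        self.saved = {}  # name -> (latitude, longitude)

    def handle(self, command, current_fix):
        """Interpret a transcribed voice command against the current GPS fix."""
        words = command.lower().split()
        if words[:2] == ["save", "location"]:
            name = " ".join(words[2:])
            self.saved[name] = current_fix
            return f"saved {name}"
        if words[:2] == ["where", "is"]:
            name = " ".join(words[2:])
            if name in self.saved:
                lat, lon = self.saved[name]
                return f"{name} is at {lat:.4f}, {lon:.4f}"
            return f"no saved location called {name}"
        return "command not recognized"

store = LocationStore()
print(store.handle("save location bus stop", (33.9480, -83.3773)))
# → saved bus stop
print(store.handle("where is bus stop", (33.9500, -83.3800)))
# → bus stop is at 33.9480, -83.3773
```

A production version would sit behind a speech-recognition front end and respond through text-to-speech rather than returning strings.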

The researchers plan to open-source the project. They suggest that the system is unobtrusive and wouldn't attract attention when used in public. The downside is having to carry a backpack everywhere. Perhaps in the not-too-distant future, researchers will figure out a way to pack this kind of tech into a pair of smart glasses.