Harvard and MIT researchers working to simulate the visual cortex to give computers true sight
Harvard and MIT have banded together to basically "reverse engineer" the human brain's ability to process visual data into usable information. Instead of testing one processing model at a time, though, they're using a screening technique borrowed from molecular biology to pit thousands of candidate models against particular object recognition tasks. To get the computational juice to pull this off, they've been leaning heavily on GPUs, saying their off-the-shelf parallel computing setup gives them hundred-fold speed improvements over conventional methods. So far they claim their results best "state-of-the-art computer vision systems" (which, if iPhoto's skills are any indication, wouldn't take much), and they hope not only to improve tasks such as face recognition, object recognition and gesture tracking, but also to feed their findings back into a better understanding of the brain's mysterious machinations. A delicious cycle! There's a video overview of their approach after the break.
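To make the screening idea concrete, here's a minimal, purely illustrative Python sketch of the general approach: randomly sample lots of candidate model configurations, score each one on a recognition benchmark, and keep the winner. All names, parameters, and the toy scoring function are our own assumptions for illustration, not the researchers' actual code (their real pipeline evaluates biologically inspired vision models on labeled images, with the heavy lifting done on GPUs).

```python
import random

def sample_model_params(rng):
    """Draw one random candidate model configuration.
    The parameter names here are hypothetical placeholders."""
    return {
        "n_filters": rng.choice([16, 32, 64, 128]),
        "filter_size": rng.choice([3, 5, 7, 9]),
        "threshold": rng.uniform(0.0, 1.0),
        "pooling": rng.choice(["max", "mean"]),
    }

def evaluate(params):
    """Stand-in for scoring a model on an object recognition task.
    A real pipeline would run the candidate model over a labeled
    image set and return its recognition accuracy; here we just
    produce a deterministic toy score per configuration."""
    rng = random.Random(str(sorted(params.items())))
    return rng.uniform(0.5, 1.0)

def screen(n_candidates=1000, seed=0):
    """High-throughput screening: evaluate many candidates and
    return the best (score, params) pair."""
    rng = random.Random(seed)
    candidates = (sample_model_params(rng) for _ in range(n_candidates))
    return max(((evaluate(p), p) for p in candidates),
               key=lambda scored: scored[0])

best_score, best_params = screen()
print(best_score, best_params)
```

The appeal of this brute-force style is that each candidate can be scored independently, which is exactly the kind of embarrassingly parallel workload that maps well onto GPUs.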