Microsoft's image-recognition AI is a stickler for the details

Computer scientists have spent years modeling neural networks after the human brain and teaching machines to learn on their own, handling tasks like reading documents and recognizing speech. Image recognition is another useful chore for these networks, and Microsoft Research has just offered a peek at its recent dive into the matter. Project Adam is a deep-learning system that's been taught to complete image-recognition tasks 50 times faster, and with twice the accuracy, of its predecessors. So, what does that mean? Well, instead of just determining the breed in a canine snapshot, the tech can also distinguish between American and English Cocker Spaniels. The team is looking into tacking on speech and text recognition as well, so your next virtual assistant may not only wrangle your schedule and commute, but also continually learn from the world you live in.

What's more, the network is said to pack enough muscle to serve up accurate nutrition info instantly based on a smartphone photo of your plate. Drawing on huge amounts of data and training images, Project Adam's deep learning builds a hierarchy that enables it to sort images into tens of thousands of categories. "It automatically learns how to extract features from these images, so that when you show it an image that it has never seen before, it can accurately categorize it in one of the categories that you've already taught it," says Partner Research Manager Trishul Chilimbi. The system's neurons examine small portions of a picture rather than the entire thing, allowing a more detailed breakdown of characteristics like facial features, textures and breed specifics for increasingly fine-grained classification. As is the case with most research projects, there's no clear indication as to when (or if) we'll be able to make use of the tech.
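That "small portions of a picture" idea can be sketched in a few lines of Python. This is a toy illustration of local receptive fields, not Microsoft's actual code: it slides a small window over an image and summarizes each patch, the raw material a real network would learn features from.

```python
import numpy as np

def extract_patches(image, patch_size=4, stride=4):
    """Slide a small window over the image and collect local patches.

    Mirrors the idea of neurons that each examine a small region
    of the picture rather than the entire thing.
    """
    h, w = image.shape
    patches = []
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            patches.append(image[top:top + patch_size, left:left + patch_size])
    return np.stack(patches)

def patch_features(image):
    """Summarize each patch with a crude statistic (mean intensity).

    A real deep network would *learn* its features; this just shows
    how local patches become a feature vector for classification.
    """
    return extract_patches(image).mean(axis=(1, 2))

# A toy 8x8 "image": bright top half, dark bottom half.
img = np.vstack([np.ones((4, 8)), np.zeros((4, 8))])
features = patch_features(img)
print(features)  # four patch means: [1. 1. 0. 0.]
```

Stacking layers of such patch-level detectors, each feeding the next, is what lets a deep system work up from edges and textures to something as specific as a Cocker Spaniel's coat.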