Google uses computer vision and machine learning to index your photos

Tags are so 2008. Google doesn't want you to waste time tagging your photos, except for the people in them. The web giant wants to be able to recognize more abstract concepts like "sunset" or "beach" automatically and attach that metadata without further input. In yet another post-I/O update, Google+ photos now uses computer vision and machine learning to identify objects and settings in your uploaded snapshots. You can simply search for "my photos of trees" or "Tim's photos of bikes" and get surprisingly accurate results, with nary a manually added tag in sight. You can perform the searches in Google+, obviously, but you can also execute your query from the standard Google search page. It's pretty neat, but sadly Mountain View seems to have forgotten what cats look like.
