Google trained its AI camera with help from pro photographers

There's a lot more to Clips than good focus and the rule of thirds.
Matt Brian, @m4tt
01.26.18 in Cameras
When Google unveiled its $249 Clips camera back in October 2017, it was easy to question Google's motives. Lifelogging cameras weren't a new idea, nor were they particularly successful, and given the rise in smartphone imaging and video quality, it was a tough ask to trust a wearable camera to automatically capture important moments.

With Clips expected to debut in the coming weeks, Google has penned a blog post (first detailed by The Verge) explaining how it trained its algorithms to identify the best shots. To do that, its AI needed to learn from something or someone, so Google called in photography experts from various backgrounds and supplied its model with some of the best photography available.

"We ended up discovering—through trial and error and a healthy dose of luck—a treasure trove of expertise in the form of a documentary filmmaker, a photojournalist, and a fine arts photographer," said Josh Lovejoy, Senior Interaction Designer at Google. "Together, we began gathering footage from people on the team and trying to answer the question, 'What makes a memorable moment?'"

Some of that learning comes down to principles you may have picked up while getting to grips with a new smartphone camera or point-and-shoot. Understanding focus, particularly depth of field, and the rule of thirds are key, but so are some more "common sense" suggestions. Everybody knows to keep fingers out of the shot and not to make quick movements, but machine learning algorithms have no such understanding.

"We needed to train models on what bad looked like," said Lovejoy. "By ruling out the stuff the camera wouldn't need to waste energy processing (because no one would find value in it), the overall baseline quality of captured clips rose significantly."

Google admits that while it trained its AI to appreciate "stability, sharpness, and framing," Clips won't always get it right. It can ensure that it's framed a shot well and has a family member in focus, but it won't know that the big shiny ring on someone's finger is what everyone will want to see.

"Success with Clips isn't just about keeps, deletes, clicks, and edits (though those are important)," Lovejoy notes. "It's about authorship, co-learning, and adaptation over time. We really hope users go out and play with it."
