Google used a 64-camera rig to train its portrait lighting AI

The company has detailed the technology behind Portrait Light.

Google

Google’s Portrait Light feature can make some of your more mediocre photos look a lot better by giving you a way to change their lighting direction and intensity. The tech giant launched the AI-based lighting feature in September for the Pixel 4a 5G and Pixel 5 before giving older Pixel phones access to it. Now, Google has published a post on its AI blog explaining the technology behind Portrait Light, including how it trained its machine learning models.

To be able to train one of those models to add lighting to a photo from a certain direction, Google needed millions of portraits with and without extra lighting from different directions. The company used a spherical lighting rig with 64 cameras and 331 individually programmable LED light sources to capture the photos it needed. It photographed 70 people with different skin tones, face shapes, genders, hairstyles and even clothing and accessories, illuminating them inside the sphere one light at a time. The company also trained a model to determine the best illumination profile for automatic light placement. Its post has all the technical details, if you want to know how the feature came to be.
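Capturing subjects one light at a time works because light is additive: a portrait lit by any combination of the rig's LEDs can be synthesized as a weighted sum of the single-light captures. The sketch below illustrates that principle with made-up array shapes and weights (the variable names and the tiny 4×4 "image" are hypothetical, not Google's pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in data: one image per individually lit LED.
n_lights, h, w = 331, 4, 4
olat = rng.random((n_lights, h, w, 3))  # (lights, height, width, RGB)

# A target lighting environment is just a weight per LED:
# here, a dim ambient fill plus a strong key light from LED 10.
weights = np.full(n_lights, 0.001)
weights[10] = 0.8

# Relit image = sum over lights of weight_i * olat_i.
relit = np.tensordot(weights, olat, axes=(0, 0))

print(relit.shape)  # (4, 4, 3)
```

Because the combination is linear, any lighting direction and intensity in between the captured LEDs can be approximated by adjusting the weights, which is what makes such a dataset useful for training a relighting model.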


On the Pixel 4, Pixel 4a, Pixel 4a (5G) and Pixel 5, the Pixel Camera automatically applies Portrait Light in default mode and to Night Sight photos that have people in them. On older Pixels, you can find the option to switch it on under the Adjust section when you edit a photo of a person. If you want to manually re-position the light and adjust its brightness, you'll have to shoot in Portrait Mode and edit the image in Google Photos.
