Of all the AI-related features inside the Pixel 2 and Pixel 2 XL, the portrait mode is arguably the most impressive -- Google manages to produce dramatic-looking depth-of-field effects without relying on dual cameras or other exotic hardware. And now, it's sharing some of those secrets with the rest of the world. The company has opened up the source code for DeepLab-v3+, an AI-based image segmentation technology similar to the one that helps Pixel 2 phones separate the foreground from the background. It uses a neural network to detect the outlines of foreground objects, classifying the objects you care about in a scene while ignoring those you don't.
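To give a rough sense of what semantic segmentation produces, here's a minimal, hypothetical sketch in Python (not Google's released code): a segmentation network emits per-pixel class scores, taking the argmax yields a label map, and from that label map an app can extract a binary mask for the subject it cares about. The tiny image, class list, and score values below are invented for illustration.

```python
import numpy as np

# Hypothetical per-pixel class scores from a segmentation network,
# for a tiny 2x3 image and three classes: background, person, dog.
CLASSES = ["background", "person", "dog"]
scores = np.array([
    [[0.9, 0.05, 0.05], [0.2, 0.7, 0.1], [0.1, 0.8, 0.1]],
    [[0.8, 0.1, 0.1],   [0.3, 0.6, 0.1], [0.2, 0.1, 0.7]],
])  # shape: (height, width, num_classes)

# Label map: the most likely class at each pixel.
label_map = scores.argmax(axis=-1)

# Binary mask for the class we care about (e.g. the portrait subject),
# which a camera app could use to blur everything else.
person_mask = label_map == CLASSES.index("person")

print(label_map)    # 0 = background, 1 = person, 2 = dog
print(person_mask)  # True where the subject is
```

A portrait mode built on a mask like this would keep the `True` pixels sharp and apply a synthetic blur to the rest.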
This doesn't guarantee that new phones or camera apps will take Pixel 2-quality portraits, although it does open that possibility. And really, phone photos aren't the point. Google researchers are hoping that both academics and industry figures will use the source code not only to improve on the technology, but also to find uses that Google hasn't anticipated. The code could serve object detection and many other tasks where spotting boundaries comes in handy.
Update: Google has since clarified that this isn't the technology from the Pixel 2, though it can produce similar results. We've updated our story accordingly.