MIT made an AI that can detect and create fake images

It offers valuable insight into how neural networks learn context.

Creating digital renderings and editing images can take hours, but researchers from MIT and IBM want to change that. They've trained AI to generate photographic images from scratch and to intelligently edit objects inside them. While this could be beneficial for artists and designers, it also offers insight into how neural networks learn context, and the team hopes to leverage the tool to spot fake or altered images.

Named GANpaint Studio, the tool is available as a free demo. Rather than manually adding a tree to an image, you can tell the tool where you want the object and it will add one that matches the scene. You can erase objects too, like stools from an image of a kitchen. It's still a work in progress, but the team hopes GANpaint Studio might one day edit video clips. If, for instance, an essential prop were left out of a film scene, editors could use AI to add it in later.

As they were building GANpaint Studio, the researchers were surprised to discover that the system learned simple rules about the relationships between objects -- like that a door does not belong in the sky. Because GANpaint Studio is built on a GAN -- a pair of neural networks trained to compete against each other -- the researchers can inspect its internal reasoning behind decisions like preventing a cloud from appearing in the grass. That insight could help researchers better understand how neural networks learn context and what we think of as common sense.
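The adversarial setup behind a GAN can be sketched in a few lines: a generator produces fake samples and a discriminator learns to separate them from real data, with each network's progress pushing the other to improve. Below is a minimal toy illustration in NumPy -- a scalar-valued generator and a logistic-regression discriminator, with all sizes, rates, and distributions chosen arbitrarily for the sketch. This is not the GANpaint Studio model, just the general two-player structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator: a linear map from 1-D noise to a sample, parameters (a, b).
def generate(a, b, z):
    return a * z + b

# Discriminator: logistic regression on a scalar, parameters (w, c).
def discriminate(w, c, x):
    return sigmoid(w * x + c)

# One gradient step for the discriminator: push real samples toward
# label 1 and fake samples toward label 0 (binary cross-entropy).
def disc_step(w, c, real, fake, lr=0.05):
    p_real = discriminate(w, c, real)
    p_fake = discriminate(w, c, fake)
    # d(BCE)/d(logit) = prediction - label
    grad_w = np.mean((p_real - 1.0) * real) + np.mean(p_fake * fake)
    grad_c = np.mean(p_real - 1.0) + np.mean(p_fake)
    return w - lr * grad_w, c - lr * grad_c

def bce_loss(w, c, real, fake):
    eps = 1e-9
    return -(np.mean(np.log(discriminate(w, c, real) + eps))
             + np.mean(np.log(1.0 - discriminate(w, c, fake) + eps)))

real = rng.normal(4.0, 0.5, size=256)            # "real" data
fake = generate(0.5, 0.0, rng.normal(size=256))  # generator output

w, c = 0.0, 0.0
loss_before = bce_loss(w, c, real, fake)
for _ in range(50):
    w, c = disc_step(w, c, real, fake)
loss_after = bce_loss(w, c, real, fake)
```

In a full GAN, the generator would also take gradient steps to fool the discriminator, and the two would alternate; inspecting what the generator's internal units come to represent is what lets researchers surface the learned rules described above.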

While GANpaint Studio makes it easy to create fake images, it could also help computer scientists learn to spot them. "You need to know your opponent before you can defend against it," said Jun-Yan Zhu, who co-authored a paper on the tool. The researchers will present their work at a conference next month. In the meantime, you can give GANpaint Studio a spin.