Even Deep Tom Cruise thinks this latest generation of digitally fabricated faces has walked out of the uncanny valley and sidled right on up to photorealism. While the possibilities for entertainment using this tech are boundless, deep fake videos have the potential to severely disrupt the public’s trust in government and our elected officials, and even our ability to believe our own eyes. On Wednesday, Facebook and Michigan State University debuted a novel method of not just detecting deep fakes but discovering which generative model produced them by reverse engineering the image itself.
Beyond telling you whether an image is a deep fake, many current detection systems can tell whether the image was generated by a model that the system saw during its training, known as “closed-set” classification. The problem is that if the image was created by a generative model the detector wasn’t trained on, the system won’t have the prior experience to spot the fake.
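The closed-set vs. open-set distinction can be pictured in a few lines. The sketch below is purely illustrative and not the FB-MSU detector: the model names, prototype "fingerprint" vectors, and similarity threshold are all made up. A closed-set classifier is forced to pick one of the models it trained on, while an open-set one can answer "unknown."

```python
import numpy as np

# Illustrative only: prototype "fingerprint" vectors for two hypothetical
# generative models the classifier was trained on.
PROTOTYPES = {
    "model_a": np.array([1.0, 0.0]),
    "model_b": np.array([0.0, 1.0]),
}

def cosine(a, b):
    # Cosine similarity between two feature vectors.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

def closed_set(features):
    """Closed-set: always answers with one of the training-time models."""
    return max(PROTOTYPES, key=lambda name: cosine(PROTOTYPES[name], features))

def open_set(features, threshold=0.8):
    """Open-set: falls back to "unknown" when nothing is similar enough."""
    best = closed_set(features)
    return best if cosine(PROTOTYPES[best], features) >= threshold else "unknown"
```

A fake from a never-before-seen model would land far from every prototype, which is exactly the case the closed-set approach mishandles.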
The FB-MSU reverse engineering technique, while not exactly a cutting-edge methodology, “relies on uncovering the unique patterns behind the AI model used to generate a single deep fake image,” the team explained in a Wednesday blog post.
“We begin with image attribution and then work on discovering properties of the model that was used to generate the image,” the team continued. “By generalizing image attribution to open-set recognition, we can infer more information about the generative model used to create a deepfake that goes beyond recognizing that it has not been seen before.”
What’s more, this system can compare and trace similarities across a series of deep fakes, enabling researchers to follow groups of falsified images back to a single generative source, which should help social media moderators better track coordinated misinformation campaigns.
To perform this detection technique, FB-MSU researchers first ran a set of deep fake images through a Fingerprint Estimation Network. FENs are able to discern subtle patterns imprinted upon images by the specific device that made them. For digital photographs, each of these patterns is unique due to manufacturing variations in the camera that took them. The same is true for deep fakes: each generative model has its own quirks that are imprinted on its creations and can be used to uncover the model’s identity based on the image itself.
Since there are effectively a limitless number of generative models out in the internet wilds, the researchers had to generalize their search for these image fingerprints. “We estimated fingerprints using different constraints based on properties of a fingerprint in general, including the fingerprint magnitude, repetitive nature, frequency range and symmetrical frequency response,” the team explained. These constraints were then fed back into the FEN, “to enforce the generated fingerprints to have these desired properties.”
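Two of those constraint ideas can be roughly illustrated in plain NumPy. The sketch below is an assumption-laden stand-in, not the paper's architecture: a 3x3 box blur plays the role of the learned fingerprint estimator, and the two returned scores crudely mirror the "magnitude" and "frequency range" constraints.

```python
import numpy as np

def estimate_fingerprint(image):
    """Toy fingerprint estimate: the high-frequency residual left after
    subtracting a 3x3 box blur. A real FEN learns this filter; the box
    blur here is just a stand-in."""
    padded = np.pad(image, 1, mode="edge")
    h, w = image.shape
    blurred = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    return image - blurred

def constraint_scores(fp):
    """Crude versions of two constraints: the fingerprint should have small
    magnitude, and its energy should avoid low frequencies."""
    magnitude = float(np.mean(fp ** 2))
    spec = np.abs(np.fft.fftshift(np.fft.fft2(fp)))
    cy, cx = spec.shape[0] // 2, spec.shape[1] // 2
    r = min(spec.shape) // 8  # small band around the zero-frequency bin
    low_fraction = float(spec[cy - r:cy + r, cx - r:cx + r].sum() / spec.sum())
    return magnitude, low_fraction
```

In a training loop, scores like these would become loss terms pushing the estimator toward fingerprints with the desired properties, which is the role the constraints play when fed back into the FEN.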
Once the system could consistently separate the genuine fingerprints from the deep fakes, it took all those false fingerprints and dumped them into a parsing model to suss out their various hyperparameters. A generative model’s hyperparameters are the variables it uses to guide its self-learning process. So, if you can figure out what the various hyperparameters are, you can figure out what model used them to create that image. The Facebook team likens this to being able to identify the various engine components of a car just by listening to it idle.
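One way to picture the parsing step: given fingerprint features for a handful of known generative models along with their hyperparameters, you could guess the hyperparameters behind a new fingerprint by finding its nearest known neighbor. Everything below is made up for illustration (model names, feature vectors, hyperparameter values), and the actual system trains a parsing network rather than doing a lookup.

```python
import numpy as np

# Entirely hypothetical registry: fingerprint features and hyperparameters
# for three made-up generative models.
KNOWN_MODELS = {
    "gan_a": (np.array([0.9, 0.1, 0.0]), {"layers": 8, "loss": "hinge"}),
    "gan_b": (np.array([0.1, 0.8, 0.2]), {"layers": 16, "loss": "wasserstein"}),
    "vae_c": (np.array([0.0, 0.2, 0.9]), {"layers": 6, "loss": "elbo"}),
}

def parse_hyperparameters(fingerprint_features):
    """Guess a model's hyperparameters via nearest neighbor in fingerprint
    space (cosine similarity). The real system learns this mapping."""
    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    _, hyperparams = max(KNOWN_MODELS.values(),
                         key=lambda entry: cosine(entry[0], fingerprint_features))
    return hyperparams
```

This is the "listening to the engine idle" idea in miniature: the fingerprint alone, without access to the model, points back at the configuration that likely produced it.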
Since the FB-MSU team is treading uncharted research waters with this study, there isn’t any specific baseline to compare their test results against. So the team created its own, and found “a much stronger and generalized correlation between generated images and the embedding space of meaningful architecture hyperparameters and loss function types, compared to a random vector of the same length and distribution.” In short, they can’t say objectively how good their system is, since there’s no other research to compare it to, but they do know it’s more effective than blind luck.