
ChatGPT will digitally tag images generated by DALL-E 3 to help battle misinformation

Though OpenAI admits that this isn't a silver bullet.


In an age when fraudsters are using generative AI to scam people out of money or tarnish reputations, tech firms are coming up with ways to help users verify content, starting with still images. As teased in its 2024 misinformation strategy, OpenAI is now including provenance metadata in images generated with ChatGPT on the web and via the DALL-E 3 API, with their mobile counterparts receiving the same upgrade by February 12.

The metadata follows the C2PA (Coalition for Content Provenance and Authenticity) open standard, and when such an image is uploaded to the Content Credentials Verify tool, you can trace its provenance lineage. An image generated with ChatGPT, for instance, will show an initial metadata manifest indicating its DALL-E 3 API origin, followed by a second manifest showing that it surfaced in ChatGPT.
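Those manifests live inside the file itself, in JUMBF boxes (the ISO container format C2PA builds on) whose manifest store carries the label "c2pa", so their mere presence can be spotted with a crude byte scan. Below is a minimal Python sketch of such a check; it is a heuristic only, nothing like the cryptographic signature verification the Verify tool performs, and the script name is illustrative.

```python
import sys

def has_c2pa_manifest(path: str) -> bool:
    """Heuristically check a file for an embedded C2PA manifest.

    C2PA stores manifests in JUMBF boxes, and the manifest store
    is labeled "c2pa". Finding both byte strings in the file is a
    rough presence check, not a verification of the cryptographic
    signatures the standard actually relies on.
    """
    with open(path, "rb") as f:
        data = f.read()
    return b"jumb" in data and b"c2pa" in data

if __name__ == "__main__":
    # Usage: python c2pa_check.py image1.png image2.jpg ...
    for path in sys.argv[1:]:
        verdict = "C2PA metadata found" if has_c2pa_manifest(path) else "no C2PA metadata"
        print(f"{path}: {verdict}")
```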

Despite the sophisticated cryptography behind the C2PA standard, this verification method only works while the metadata is intact; the tool is of no use if you upload an AI-generated image that has been stripped of it, as is the case with virtually any screenshot or image uploaded to social media. Unsurprisingly, the current sample images on the official DALL-E 3 page returned blank results as well. On its FAQ page, OpenAI admits that this isn't a silver bullet in the fight against misinformation, but it believes the key is to encourage users to actively look for such signals.
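That fragility is easy to reproduce. As a sketch of what a screenshot tool or an upload pipeline effectively does, the snippet below re-encodes an image with Pillow, which by default does not carry over the metadata segments C2PA uses; the file names are illustrative.

```python
from PIL import Image  # pip install Pillow

# Re-encode an image the way a screenshot tool or a social
# network's image pipeline effectively would. Pillow's default
# save does not preserve the JUMBF/C2PA segments, so the copy
# comes out with its provenance metadata gone.
with Image.open("dalle3_original.png") as im:
    im.save("reencoded_copy.png")

# Feeding "reencoded_copy.png" to the byte-scan heuristic above
# (or to the Content Credentials Verify tool) would come up empty.
```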

While OpenAI's latest effort to thwart fake content is currently limited to still images, Google DeepMind already has SynthID for digitally watermarking both images and audio generated by AI. Meanwhile, Meta has been testing invisible watermarking via its AI image generator, which may be less prone to tampering.