
Researchers say current AI watermarks are trivial to remove

Studies show it's pretty easy to evade watermarks.


A traditional watermark is a visible logo or pattern that can appear on anything from the cash in your wallet to a postage stamp, all in the name of discouraging counterfeiting. You might have seen a watermark in the preview of your graduation photos, for example. But in the case of artificial intelligence, the concept takes a slight twist, as most things in the space do.

In the context of AI, watermarking can allow a computer to detect whether text or an image was generated by artificial intelligence. But why watermark images to begin with? Generative AI creates a prime breeding ground for deepfakes and other misinformation. So despite being invisible to the naked eye, watermarks can combat the misuse of AI-generated content and can even be integrated into machine-learning systems developed by tech giants like Google. Other major players in the space, everyone from OpenAI to Meta and Amazon, have pledged to develop watermarking technology to combat misinformation.
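To make the idea concrete, here is a minimal, purely illustrative sketch of an invisible watermark: a short bit signature hidden in the least significant bits of an image's pixels, which a detector can later check. The signature and function names are hypothetical, and real systems such as Google's SynthID use far more robust, model-integrated schemes than this toy example.

```python
import numpy as np

# Hypothetical 16-bit signature; a real watermark would be far harder to guess or strip.
SIGNATURE = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1], dtype=np.uint8)

def embed_watermark(image: np.ndarray) -> np.ndarray:
    """Hide the signature in the least significant bits of the first few pixel values."""
    marked = image.copy()
    flat = marked.reshape(-1)
    flat[:SIGNATURE.size] = (flat[:SIGNATURE.size] & 0xFE) | SIGNATURE
    return marked

def detect_watermark(image: np.ndarray) -> bool:
    """Check whether the signature is present in the image's low-order bits."""
    flat = image.reshape(-1)
    return np.array_equal(flat[:SIGNATURE.size] & 1, SIGNATURE)

if __name__ == "__main__":
    original = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)
    marked = embed_watermark(original)
    print(detect_watermark(marked))    # True: invisible to the eye, detectable by code
    print(detect_watermark(original))  # Almost certainly False
```

A scheme this simple also shows why robustness matters: any edit that touches those low-order bits, such as re-compression, destroys the mark.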

That’s why computer science researchers at the University of Maryland (UMD) took it upon themselves to examine how easy it is for bad actors to add or remove watermarks. Soheil Feizi, a professor at UMD, told Wired that his team’s findings confirm his skepticism that any reliable watermarking application exists at this point. The researchers were able to easily evade current watermarking methods during testing and found it even easier to add fake watermarks to images that weren’t generated by AI. But beyond testing how easy it is to evade watermarks, one UMD team notably developed a watermark that is nearly impossible to remove from content without completely compromising the intellectual property, making it possible to detect when content has been stolen.


In a similar collaborative research effort between the University of California, Santa Barbara and Carnegie Mellon University, researchers found through simulated attacks that watermarks were easily removable. The paper identifies two distinct approaches for eliminating watermarks through these attacks: destructive and constructive. In a destructive attack, bad actors treat the watermark as if it were part of the image itself. Tweaking brightness or contrast, applying JPEG compression, or even simply rotating the image can remove the watermark. The catch is that while these methods do get rid of the watermark, they also degrade the image quality, making it noticeably worse. A constructive attack is gentler on image quality, removing the watermark with techniques like the good old Gaussian blur.
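To illustrate what those attacks can look like in practice, here is a rough sketch using the Pillow imaging library. The file names and parameter values are placeholders, not the exact pipeline from the paper, and the transformations shown (re-compression, brightness/contrast shifts, rotation, blur) are simply the kinds of edits the researchers describe.

```python
from io import BytesIO
from PIL import Image, ImageEnhance, ImageFilter

def destructive_attack(img: Image.Image) -> Image.Image:
    """Degrade the image enough that a fragile watermark is unlikely to survive."""
    # Aggressive JPEG re-compression discards fine detail along with the mark.
    buffer = BytesIO()
    img.save(buffer, format="JPEG", quality=25)
    buffer.seek(0)
    attacked = Image.open(buffer).convert("RGB")
    # Brightness/contrast tweaks and a small rotation further perturb pixel values.
    attacked = ImageEnhance.Brightness(attacked).enhance(1.2)
    attacked = ImageEnhance.Contrast(attacked).enhance(0.8)
    return attacked.rotate(2, expand=False)

def constructive_attack(img: Image.Image) -> Image.Image:
    """Smooth the image gently, preserving most of its quality."""
    return img.filter(ImageFilter.GaussianBlur(radius=1.5))

if __name__ == "__main__":
    # "watermarked.png" is a hypothetical input file for this sketch.
    watermarked = Image.open("watermarked.png").convert("RGB")
    destructive_attack(watermarked).save("attacked_destructive.jpg")
    constructive_attack(watermarked).save("attacked_constructive.png")
```

The trade-off the researchers describe is visible here: the destructive version looks noticeably worse, while the blurred version stays close to the original.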

Although watermarking for AI-generated content needs to improve before it can withstand simulated attacks like those featured in these studies, it's easy to envision a scenario where digital watermarking becomes a competitive race against hackers. Until a new standard is developed, we can only hope for the best from new tools like Google’s SynthID, an identification tool for generative art that will continue to be workshopped by developers until it hits the mainstream.

But the timing for innovation could not be better. With the 2024 presidential election in the United States poised to take center stage, AI-generated content could play a huge role in swaying political opinion through things like deepfake ads. The Biden administration has taken note of the issue, citing reasonable concerns about how artificial intelligence could be used for disruptive purposes, particularly in the realm of misinformation.