Google's Quest to Spot AI-Generated Images with Digital Watermarks

Google is testing SynthID, a digital watermark technology developed by DeepMind, to identify AI-generated images and combat disinformation. The invisible watermark subtly alters image pixels to be detectable by computers while remaining unnoticed by humans. The system aims to distinguish real images from AI-created ones. Although not foolproof, it addresses challenges in recognizing manipulated images. This experimental launch is part of Google's commitment to safer AI development. Other tech companies are also exploring watermarking methods to enhance transparency and detect AI-generated content. China has even banned AI-generated images without watermarks.

Google is currently testing a digital watermark technology designed to identify images produced by artificial intelligence (AI) as part of its efforts to combat disinformation.

The project, named SynthID, is a creation of Google's AI subsidiary, DeepMind. Its purpose is to recognize images that are generated by machines. The technology works by making subtle changes to individual pixels, producing a watermark that is invisible to the human eye but detectable by computers.
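SynthID's actual technique is proprietary and far more robust than anything shown here, but the general idea of pixel-level invisible watermarking can be illustrated with a classic toy scheme: hiding a bit string in the least-significant bits of pixel values, where the change is too small for the eye to notice. This is a sketch of the concept only, not DeepMind's method.

```python
# Toy least-significant-bit (LSB) watermark -- illustrative only.
# Unlike SynthID, an LSB mark is easily destroyed by cropping or
# re-encoding; it just shows how a watermark can be invisible to
# humans yet readable by software.

def embed(pixels, bits):
    """Return a copy of `pixels` with `bits` written into the LSBs."""
    out = list(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & ~1) | bit  # clear the LSB, then set it to the bit
    return out

def extract(pixels, n_bits):
    """Read back the first `n_bits` least-significant bits."""
    return [p & 1 for p in pixels[:n_bits]]

image = [200, 113, 47, 88, 156, 201, 34, 99]   # fake 8-pixel grayscale image
mark = [1, 0, 1, 1, 0, 1, 0, 0]                # hypothetical watermark bits

stamped = embed(image, mark)
# Each pixel value changes by at most 1, so the image looks unchanged...
assert all(abs(a - b) <= 1 for a, b in zip(image, stamped))
# ...yet the mark is fully recoverable by a detector.
assert extract(stamped, len(mark)) == mark
```

The fragility of this scheme is exactly why SynthID's approach, which DeepMind says survives editing and cropping, is harder to build.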

Nonetheless, DeepMind has acknowledged that this approach is not completely immune to highly advanced image manipulations.

As AI progresses, distinguishing genuine photographs from AI-generated ones has become increasingly difficult, as exemplified by BBC Bitesize's AI or Real quiz.

AI-driven image generation tools have become mainstream, with Midjourney a prominent example boasting over 14.5 million users. These tools let users create images in seconds from simple text prompts, raising questions about copyright and ownership on a global scale.

Notably, Google has developed its own image generator, Imagen, and the watermark system applies only to images produced with that tool.

Traditionally, watermarks are logos or text overlays on images that serve the dual purpose of asserting ownership and acting as a deterrent against unauthorized copying and usage. On platforms like the BBC News website, copyright watermarks are commonly placed in the bottom-left corner of images.

However, these conventional watermarks are inadequate for identifying AI-generated images because they can easily be edited out or cropped. To address this limitation, Google's system embeds an essentially invisible watermark, allowing users to quickly determine whether an image is authentic or machine-generated using Google's software.

Pushmeet Kohli, Head of Research at DeepMind, explained that their system alters images so subtly that the changes are imperceptible to human observers. Unlike hashing, an existing technique used to create digital fingerprints of known instances of harmful content, DeepMind's approach remains effective even after an image has been edited or cropped.
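The contrast with hashing is worth spelling out: a cryptographic hash is a fingerprint of an image's exact bytes, so changing even a single pixel yields a completely different fingerprint, and a database of known harmful content no longer matches. (Deployed systems often use perceptual hashes that tolerate small edits, but the basic fragility is the same point the article makes.) A minimal sketch with made-up byte values:

```python
# Why byte-level hashing breaks under editing: a cryptographic hash
# fingerprints the exact bytes, so any change -- even one pixel value --
# produces an entirely different digest. SynthID's watermark, by
# contrast, is designed to survive edits and crops.
import hashlib

original = bytes([200, 113, 47, 88, 156, 201, 34, 99])  # fake image bytes
edited = bytes([200, 113, 47, 88, 156, 201, 34, 98])    # one value changed

h1 = hashlib.sha256(original).hexdigest()
h2 = hashlib.sha256(edited).hexdigest()

# The fingerprint of the edited image no longer matches the known one.
assert h1 != h2
```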

Kohli cautioned that the system is in an "experimental launch" stage and its resilience needs to be tested through usage.

In July, Google joined six other prominent AI companies in signing a voluntary agreement in the US. This agreement focused on ensuring the safe development and use of AI, including the implementation of watermarks to enable individuals to identify computer-generated images.

While this step was applauded by some, Claire Leibowicz from the Partnership on AI campaign group highlighted the need for more coordination between businesses. She called for standardization to provide clarity on which methods are effective in detecting AI-generated content.

Major tech players like Microsoft and Amazon have also committed to watermarking some AI-generated content. Meta, formerly Facebook, has revealed plans to add watermarks to videos created by its unreleased video generator, Make-A-Video, to improve transparency around AI-generated content.

Earlier this year, China went a step further and banned AI-generated images lacking watermarks. Firms like Alibaba complied by incorporating watermarks on creations generated through its cloud division's text-to-image tool, Tongyi Wanxiang.