A new tool allows you to watermark and identify synthetic images created by Imagen
AI-generated images are becoming more popular every day. But how can we better identify them, especially when they seem so realistic?
Today, in partnership with Google Cloud, we are launching a beta version of SynthID, a tool for watermarking and identifying AI-generated images. This technology embeds a digital watermark directly into the pixels of an image, making it imperceptible to the human eye but detectable for identification.
SynthID is being released to a limited number of Vertex AI customers using Imagen, one of our latest text-to-image models, which turns input text into photorealistic images.
Generative AI technologies are evolving rapidly and computer-generated images, also known as “synthetic images,” are increasingly difficult to distinguish from those that were not created by an AI system.
While generative AI can unlock enormous creative potential, it also presents new risks, such as enabling creators to spread false information, whether intentionally or not. Being able to identify AI-generated content is essential to letting people know when they are interacting with generated media and to helping prevent the spread of misinformation.
We are committed to connecting people with high-quality information and maintaining trust between creators and users across society. Part of that responsibility is providing users with more advanced tools to identify AI-generated images so that their images – and even some edited versions – can be identified later.

Google Cloud is the first cloud provider to offer a tool to responsibly create and confidently identify AI-generated images. This technology is part of our approach to developing and deploying responsible AI. It was developed by Google DeepMind and refined in partnership with Google Research.
SynthID is not foolproof against extreme image manipulation, but it provides a promising technical approach to enabling people and organizations to work responsibly with AI-generated content. This tool could also evolve alongside other AI models and modalities beyond imagery, such as audio, video and text.
New type of watermark for AI images
Watermarks are patterns that can be superimposed on images to identify them. From physical prints on paper to the translucent text and symbols seen in digital photos today, they have evolved throughout history.
Traditional watermarks are not enough to identify AI-generated images because they are often applied as a stamp to an image and can easily be removed. For example, discrete watermarks found in the corner of an image can be cropped using basic editing techniques.
Finding the right balance between imperceptibility and robustness to image manipulations is difficult. Highly visible watermarks, often added as a layer with a name or logo on top of an image, also present aesthetic challenges for creative or commercial purposes. Likewise, some previously developed imperceptible watermarks may be lost through simple editing techniques like resizing.

We designed SynthID so that it does not compromise image quality and the watermark remains detectable even after modifications such as adding filters, changing colors, and saving with various lossy compression schemes, most commonly used for JPEG files.
SynthID uses two deep learning models (for watermarking and identification) that were trained together on a diverse set of images. The combined model is optimized on a series of objectives, including correctly identifying watermarked content and improving imperceptibility by visually aligning the watermark with the original content.
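SynthID's learned watermarking is not publicly specified, but the general idea of embedding an imperceptible, detectable signal in pixels can be illustrated with a much simpler classical technique. The sketch below hides a key-derived pseudo-random bit pattern in each pixel's least significant bit; the `SEED` key, function names, and image are all hypothetical, and this LSB trick is far less robust than SynthID's deep-learning approach (it would not survive JPEG compression, for example).

```python
import numpy as np

SEED = 42  # hypothetical shared key between embedder and detector


def embed_watermark(image: np.ndarray, seed: int = SEED) -> np.ndarray:
    """Overwrite each pixel's lowest bit with a key-derived bit pattern.

    Changing only the least significant bit alters each pixel value by at
    most 1, so the watermark is visually imperceptible.
    """
    rng = np.random.default_rng(seed)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return (image & 0xFE) | pattern


def detection_score(image: np.ndarray, seed: int = SEED) -> float:
    """Fraction of pixel LSBs that match the key's pattern (chance = 0.5)."""
    rng = np.random.default_rng(seed)
    pattern = rng.integers(0, 2, size=image.shape, dtype=np.uint8)
    return float(np.mean((image & 1) == pattern))


# A synthetic 64x64 grayscale "image" for demonstration.
image = np.random.default_rng(0).integers(0, 256, size=(64, 64), dtype=np.uint8)
marked = embed_watermark(image)

print(detection_score(marked))  # 1.0 for a freshly watermarked image
print(detection_score(image))   # near 0.5 (chance level) for an unmarked image
```

The contrast with SynthID is the point: here the "detector" is just the embedding rule run in reverse, whereas SynthID trains the embedder and detector jointly so that the watermark survives edits that would destroy a fragile scheme like this one.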
Robust and scalable approach
SynthID enables Vertex AI customers to create AI-generated images responsibly and identify them with confidence. Although this technology is not perfect, our internal testing shows that it holds up to many common image manipulations.
SynthID’s combined approach:
- Watermark: SynthID can add an imperceptible watermark to synthetic images produced by Imagen.
- Identification: By scanning an image for its digital watermark, SynthID can assess the likelihood of an image being created by Imagen.

This tool provides three confidence levels to interpret watermark identification results. If a digital watermark is detected, part of the image is probably generated by Imagen.
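The mapping from a raw detection signal to the three confidence levels can be pictured as simple thresholding. The function and thresholds below are purely hypothetical for illustration; SynthID's actual scoring and confidence bands are not published.

```python
def interpret(score: float) -> str:
    """Map a hypothetical watermark-detection score in [0, 1] to one of
    three coarse confidence levels (illustrative thresholds only)."""
    if score >= 0.9:
        return "watermark detected: part of this image is likely generated by Imagen"
    if score <= 0.1:
        return "watermark not detected"
    return "watermark possibly present: result is inconclusive"


print(interpret(0.97))
print(interpret(0.50))
print(interpret(0.02))
```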
SynthID contributes to the wide range of approaches for identifying digital content. One of the most widely used methods for identifying content is through metadata, which provides information such as who created it and when. This information is stored with the image file. Digital signatures added to the metadata can then indicate whether an image has been modified.
When metadata information is intact, users can easily identify an image. However, metadata can be manually deleted or even lost when files are edited. Because SynthID’s watermark is embedded within the pixels of an image, it is compatible with other metadata-based image identification approaches and remains detectable even when the metadata is lost.
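The fragility of metadata-based identification described above can be sketched in a few lines. This is not a real provenance API: the SHA-256 hash stands in for a proper cryptographic digital signature, and the dictionary stands in for embedded file metadata. The point is that editing the pixels breaks the signature, and stripping the metadata removes the provenance entirely, whereas a watermark embedded in the pixels themselves would travel with the image data.

```python
import hashlib


def sign(pixels: bytes) -> str:
    # Stand-in for a real digital signature over the image content.
    return hashlib.sha256(pixels).hexdigest()


pixels = bytes(range(256))
metadata = {"creator": "example-model", "signature": sign(pixels)}

# Intact metadata: the signature verifies against the unmodified pixels.
print(metadata["signature"] == sign(pixels))   # True

# Edited pixels: the stored signature no longer matches.
edited = bytes(p ^ 1 for p in pixels)
print(metadata["signature"] == sign(edited))   # False

# Stripped metadata: there is nothing left to verify at all.
metadata.clear()
print("signature" in metadata)                 # False
```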
What's next?
To responsibly create AI-generated content, we are committed to developing safe, secure and trustworthy approaches every step of the way, from image generation and identification to media literacy and information security.
These approaches must be robust and adaptable as generative models advance and expand to other media. We hope our SynthID technology can work with a wide range of solutions for creators and users across society, and we continue to evolve SynthID by collecting user feedback, improving its capabilities, and exploring new features.
SynthID could be extended for use on other AI models and we are excited about the possibility of integrating it into more Google products and making it available to third parties in the near future, allowing people and organizations to work responsibly with AI-generated content.
Note: The model used to produce synthetic images in this blog may be different from the model used on Imagen and Vertex AI.