This Is How Google Plans To Identify AI-Generated Images

As artificial intelligence (AI) continues to evolve, so does its ability to generate realistic images. This raises concerns that AI-generated images could be used to spread misinformation or to create deepfakes.
To address these concerns, Google DeepMind and Google Cloud have developed a new tool called SynthID.
What Is SynthID?
SynthID is a digital watermarking technique that can be used to identify AI-generated images with high accuracy. The watermark is imperceptible to the human eye, but it can be detected by SynthID’s identification software.

How SynthID Works
SynthID is built on two deep learning models that are trained together: one that embeds the watermark and one that detects it. The embedding model makes subtle changes to an image's pixels, creating a pattern that is invisible to the human eye; the detection model learns to recognize that pattern.
Because the two models are trained jointly, they improve together. The embedder learns to hide the watermark so that image quality is preserved, while the detector learns to find the watermark even after the image has been modified — for example by cropping, resizing, compression, or the addition of filters.
Once training is complete, the watermark is embedded directly into the pixels of every image the AI image generator produces. Because it lives in the pixels themselves rather than in metadata, it cannot simply be stripped out the way metadata can.
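SynthID's learned watermarking scheme is proprietary, but the basic idea — hiding a faint signal across an image's pixels and later recovering it — can be illustrated with a much simpler classical technique: spread-spectrum watermarking. Everything in this sketch (the secret pattern, the strength, the detection threshold) is an invented stand-in, not Google's method:

```python
import numpy as np

# Toy spread-spectrum watermark: a faint secret pattern is spread across
# every pixel and later recovered by correlating against that pattern.
# The pattern, strength, and threshold below are illustrative inventions.
rng = np.random.default_rng(42)
D = 64 * 64  # pixels in a flattened toy grayscale "image"

secret = rng.standard_normal(D)
secret /= np.linalg.norm(secret)  # unit-energy secret pattern
STRENGTH = 4.0  # total watermark energy; per-pixel change is only ~0.06

def embed(image):
    """Hide the watermark by nudging every pixel a tiny amount."""
    return image + STRENGTH * secret

def is_watermarked(image, threshold=STRENGTH / 2):
    """Correlate with the secret; marked images score near STRENGTH, clean near 0."""
    return float(image @ secret) > threshold

# Detection works on images the detector has never seen, even after
# an "edit" (here, added noise) perturbs the watermarked image.
clean = rng.standard_normal((200, D))
marked = clean + STRENGTH * secret
edited = marked + 0.5 * rng.standard_normal((200, D))

false_pos = np.mean([is_watermarked(i) for i in clean])
recall = np.mean([is_watermarked(i) for i in edited])
print(f"false positives: {false_pos:.1%}, detected after edits: {recall:.1%}")
```

The real system replaces the fixed pattern and simple correlation with learned neural networks, which is what lets the watermark survive much heavier edits while remaining invisible.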

To identify an AI-generated image, SynthID’s detection software scans the image for the embedded pattern. If the watermark is found, the image can be flagged as AI-generated with high accuracy, even if it has been edited since it was created.
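In practice, the SynthID detector reportedly returns a confidence level rather than a binary answer. A minimal sketch of mapping a detection score to three verdict bands — the secret pattern, strength, and thresholds here are all invented for illustration:

```python
import numpy as np

# Hypothetical mapping from a correlation score to a three-band verdict,
# echoing a detector that reports confidence instead of a hard yes/no.
# The secret pattern, strength, and thresholds are all invented.
rng = np.random.default_rng(7)
D = 64 * 64
secret = rng.standard_normal(D)
secret /= np.linalg.norm(secret)
STRENGTH = 4.0

def verdict(image):
    score = float(image @ secret)  # clean images score near 0, marked near STRENGTH
    if score > 3.0:
        return "watermark detected"
    if score > 1.0:
        return "possibly watermarked"
    return "no watermark detected"

image = np.zeros(D)  # stand-in for a clean image
print(verdict(image))                      # -> no watermark detected
print(verdict(image + STRENGTH * secret))  # -> watermark detected
```

The middle band matters: a heavily edited image may weaken the watermark signal without erasing it, and a graded verdict lets the tool say so instead of guessing.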