In an effort to distinguish AI-generated images from real ones, Google DeepMind, in partnership with Google Cloud, has launched a beta version of SynthID. This technology embeds a digital watermark directly into an image's pixels; the watermark is imperceptible to humans but still detectable by computers.
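To illustrate the general idea of an invisible pixel-level watermark, here is a minimal sketch using a simple least-significant-bit (LSB) scheme. This is only a conceptual analogy: SynthID's actual method is a proprietary deep-learning approach that DeepMind has not published, and it is far more sophisticated than the toy example below.

```python
# Conceptual sketch of an invisible pixel watermark using a simple
# least-significant-bit (LSB) scheme. This is NOT how SynthID works
# internally; it only shows how a mark can live in pixel data without
# being visible to the eye.

def embed_bits(pixels, bits):
    """Hide one bit in the least-significant bit of each pixel value."""
    marked = []
    for value, bit in zip(pixels, bits):
        # Clearing and setting the lowest bit changes the pixel by at
        # most 1 out of 255 -- invisible to a human viewer.
        marked.append((value & ~1) | bit)
    return marked

def read_bits(pixels):
    """Recover the hidden bits by reading each pixel's lowest bit."""
    return [value & 1 for value in pixels]

# Four 8-bit grayscale pixel values and a 4-bit mark
original = [200, 117, 54, 89]
mark = [1, 0, 1, 1]

watermarked = embed_bits(original, mark)
print(watermarked)             # [201, 116, 55, 89]
print(read_bits(watermarked))  # [1, 0, 1, 1]
```

One key difference worth noting: a naive LSB mark like this is destroyed by cropping, resizing, or JPEG compression, whereas DeepMind states that SynthID is designed to remain detectable after such common modifications.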
As stated in Google's press release: "While generative AI can unlock huge creative potential, it also presents new risks, like enabling creators to spread false information — both intentionally or unintentionally. Being able to identify AI-generated content is critical to empowering people with knowledge of when they’re interacting with generated media, and for helping prevent the spread of misinformation.
SynthID is being released to a limited number of Vertex AI customers using Imagen, one of our latest text-to-image models that uses input text to create photorealistic images."
It is interesting to note that both Google and the Canon, Reuters & Starling Lab partnership released projects this week to combat synthetic images and image manipulation. As photographers, we can only applaud these efforts, which help shield our industry, at least to a degree, from the ever-popular and increasingly realistic AI images.
You can read more on the DeepMind website here.