Google's AI Watermark Was Cracked. Here's What That Tells Us About AI Trust.

via Dev.to

This week, researchers reverse-engineered SynthID, Google's invisible watermark baked into every Gemini-generated image. The method: collect 200 images from Gemini, average their noise patterns, isolate the consistent frequency-domain signature, and invert it. Result: a 91% drop in phase coherence and a 75% reduction in carrier energy. The watermark was supposed to be invisible and unremovable. It's neither.

But the story isn't really about SynthID. It's about a fundamental property of cryptographic attestations vs. behavioral telemetry, and which one actually holds up when someone is trying to defeat it.

What SynthID Is (and Why It Seemed Safe)

SynthID works by embedding a watermark into the inference process itself. It doesn't stamp a badge onto the finished image. The watermark IS the image, built into how Gemini generates pixels. This seemed clever. If the watermark is structural, not additive, you can't just strip it like removing metadata. The image without the watermark would be a fundame…
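To make the averaging-and-inversion idea concrete, here is a minimal NumPy sketch. It is NOT the researchers' actual attack or Google's real SynthID embedding: the "watermark" below is a toy fixed frequency-domain signature added to synthetic images, and all names (`carrier`, `signature`, `corr`) are hypothetical. The point is only to show why averaging many images cancels independent noise while a shared signature survives, letting an attacker estimate and subtract it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for "generated images": each image is random noise plus a
# fixed signature built from two frequency-domain carrier bins. This is an
# illustrative model, not the real SynthID construction.
H = W = 64
carrier = np.zeros((H, W))
carrier[5, 9] = carrier[12, 3] = 1.0            # fake watermark carrier bins
signature = np.fft.ifft2(carrier).real * 5000.0  # shared spatial-domain pattern

images = [rng.normal(size=(H, W)) + signature for _ in range(200)]

# Steps 1-2: average the collected images. Independent per-image noise
# shrinks like 1/sqrt(N); the shared signature is unchanged by averaging.
mean_img = np.mean(images, axis=0)

# Step 3: the average's spectrum is an estimate of the consistent
# frequency-domain signature.
est_spectrum = np.fft.fft2(mean_img)

# Step 4: "invert" the watermark by subtracting the estimated signature
# from a target image.
target = images[0]
cleaned = target - np.fft.ifft2(est_spectrum).real

def corr(a, b):
    """Pearson correlation between two images, as a scalar."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

# Before removal the target correlates strongly with the signature;
# after subtraction the correlation collapses toward zero.
print(corr(target, signature), corr(cleaned, signature))
```

The same logic explains why "structural" watermarks are still vulnerable: anything consistent across many outputs is, by definition, estimable from many outputs.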

Continue reading on Dev.to
