
VeraSnap Solves the Other Half: Building Cryptographic Proof of Reality in a World That Can't Detect Fakes
Last week we published "Four Events in 24 Hours Exposed the Same Gap." Microsoft said detection doesn't work. Samsung shipped labels for AI content. Canada couldn't verify OpenAI's safety claims. Australia prosecuted deepfakes without forensic infrastructure. All four pointed at the same blind spot: no one can prove what AI refused to generate. CAP-SRP addresses that: the "output side." This article addresses the other half, the "input side": how do you prove a photograph is real?

TL;DR

The AI provenance ecosystem has two halves. CAP-SRP proves what AI systems refused to generate. VeraSnap (implementing the Content Provenance Protocol / CPP) proves what cameras actually captured. Together, they close the loop: every piece of digital media has either a generation provenance chain (C2PA + CAP-SRP) or a capture provenance chain (CPP / VeraSnap), or it has neither, and that absence is informative (a rough sketch of that three-way check follows the outline below).

This article walks through:

- Why detection-based approaches fail (Microsoft's own data)
- How
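To make the "either, or, neither" framing concrete, here is a minimal sketch of what a capture provenance check could look like. This is not VeraSnap's or CPP's actual protocol; the manifest fields, the Ed25519 device key, and the helper names (sign_capture, classify) are illustrative assumptions. The idea is simply that a camera signs a hash of the pixels at capture time, a verifier checks that signature later, and anything with no valid manifest lands in the third, informative bucket.

# Minimal sketch of the general idea behind capture provenance.
# NOT VeraSnap's or CPP's actual protocol: manifest structure, field
# names, and key handling here are illustrative assumptions only.
import hashlib
import json
import time

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# In a real system the private key never leaves the camera's secure
# hardware; we generate one in software purely for illustration.
device_key = Ed25519PrivateKey.generate()
device_pub = device_key.public_key()

def sign_capture(image_bytes: bytes) -> dict:
    """Produce a capture manifest: a pixel hash plus a device signature."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, "captured_at": time.time()}).encode()
    return {"payload": payload, "signature": device_key.sign(payload)}

def classify(image_bytes: bytes, manifest: dict | None) -> str:
    """Sort media into the three buckets from the TL;DR above."""
    if manifest is None:
        return "no provenance (and that absence is informative)"
    try:
        device_pub.verify(manifest["signature"], manifest["payload"])
    except InvalidSignature:
        return "invalid provenance"
    recorded = json.loads(manifest["payload"])["sha256"]
    if recorded != hashlib.sha256(image_bytes).hexdigest():
        return "invalid provenance"  # pixels changed after capture
    return "capture provenance verified"

photo = b"...raw sensor bytes..."
manifest = sign_capture(photo)
print(classify(photo, manifest))            # capture provenance verified
print(classify(photo + b"edit", manifest))  # invalid provenance
print(classify(photo, None))                # no provenance

In a production design the device key would sit in secure hardware, the manifest would be bound to the file through a standard container such as C2PA, and key distribution and revocation would matter as much as the signature itself; this sketch only shows the shape of the check.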
Continue reading on Dev.to