The last 200 years or so were a brief, glorious blip in history when we could use recorded media – photographs, audio, videos – as evidence or proof of anything. Barely a blink of an eye, relative to the full history of civilization. Nice while it lasted, guess it’s over now.
@snarfed.org Lying with photographs isn’t new; Stalin raised it to a high art in the 1930s. https://en.wikipedia.org/wiki/The_Commissar_Vanishes It wouldn’t even be accurate to say that such lying was only recently democratised, as Photoshop has been part of the photographer’s toolkit for the last two decades. All the latest generative models have done is drop the barrier to lying with images to zero.
C2PA is a fascinating end run around this. Instead of trying to prevent fakes, or to detect and prove that they are fake, C2PA lets people prove that real images (and other media) are real. It’s an entire cryptographic chain from the source capture device (eg a camera) through the photo toolchain (eg Photoshop) to the final display to a user (eg a web site), along with user-facing tools that can verify authenticity.
There’s a lot to it, too. The tools don’t just stamp an image “real” or “fake”; they capture the entire processing chain and let you step back through it to see what each tool did. There’s also a whole CA-style trust hierarchy for the PKI. Phew!
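To make the idea concrete, here’s a minimal, hypothetical sketch of a signed provenance chain in Python, using Ed25519 signatures from the `cryptography` library. This is not the actual C2PA manifest format (which embeds signed manifests in the asset itself and certifies signers through the CA-style hierarchy mentioned above); it only illustrates the core idea of each step in the toolchain signing what it did, linked back to the previous step.

```python
# Toy provenance chain in the spirit of C2PA -- illustrative only,
# not the real C2PA manifest format or its encoding.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def add_step(chain, actor_key, actor_name, action, image_bytes):
    """Append one signed processing step (capture, edit, publish) to the chain."""
    entry = {
        "actor": actor_name,
        "action": action,
        "image_hash": hashlib.sha256(image_bytes).hexdigest(),
        # Link back to the previous entry so the chain can't be reordered.
        "prev_hash": hashlib.sha256(
            json.dumps(chain[-1], sort_keys=True).encode()
        ).hexdigest() if chain else None,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["signature"] = actor_key.sign(payload).hex()
    chain.append(entry)
    return chain

def verify_chain(chain, public_keys):
    """Walk the chain and check each step's signature and back-link."""
    prev = None
    for entry in chain:
        unsigned = {k: v for k, v in entry.items() if k != "signature"}
        expected_prev = hashlib.sha256(
            json.dumps(prev, sort_keys=True).encode()
        ).hexdigest() if prev else None
        if unsigned["prev_hash"] != expected_prev:
            return False
        try:
            public_keys[entry["actor"]].verify(
                bytes.fromhex(entry["signature"]),
                json.dumps(unsigned, sort_keys=True).encode(),
            )
        except InvalidSignature:
            return False
        prev = entry
    return True

# The camera signs the original capture, an editor signs its modification,
# and anyone holding the public keys can verify the whole chain.
camera_key, editor_key = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()
keys = {"camera": camera_key.public_key(), "editor": editor_key.public_key()}

chain = add_step([], camera_key, "camera", "capture", b"raw sensor data")
chain = add_step(chain, editor_key, "editor", "crop", b"cropped image data")
print(verify_chain(chain, keys))  # True; tampering with any entry breaks it
```

In the real spec the signers’ certificates chain up to trusted roots, which is what makes the CA-style trust hierarchy matter: a verifier doesn’t just check the math, it checks that the camera maker or software vendor who signed each step is someone it recognizes.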
Kevin Kelly wrote about this!
A longer, angrier dive into this for photos specifically, from Sarah Jeong at The Verge.