You can just generate an AI photo, print it, and hold it up in front of a camera; now it's "not" AI any longer, it's a real photo taken with a real camera. I don't put any value in C2PA for AI detection.
Besides the possibility of extracting the private key from the camera, doesn't this still leave the analog hole wide open? One could set up a high-DPI screen or similar in front of the lens, displaying anything they want, and have it captured with a genuine signature. This effort seems much more noble than DRM schemes, but ultimately it's the same fight against reality.
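To make the analog-hole point concrete, here's a minimal sketch of what a device attestation can and cannot prove. This is plain ECDSA via Python's `cryptography` package, not the actual C2PA manifest format, and the key handling is simplified (a real camera would keep the key in a secure element). The point is that the signature binds bytes to the camera's key; it says nothing about whether the scene in front of the lens was real.

```python
# Simplified sketch of a C2PA-style device attestation; NOT the real
# C2PA manifest format. Assumes the camera signs whatever bytes come
# off its sensor pipeline with a per-device private key.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Hypothetical per-device key pair; the verifier only ever sees the
# public half.
device_key = ec.generate_private_key(ec.SECP256R1())
device_pub = device_key.public_key()


def capture_and_sign(sensor_bytes: bytes) -> tuple[bytes, bytes]:
    """Sign whatever the sensor produced. The signature proves these
    bytes came from this camera, nothing about the scene itself."""
    sig = device_key.sign(sensor_bytes, ec.ECDSA(hashes.SHA256()))
    return sensor_bytes, sig


def verify(image: bytes, sig: bytes) -> bool:
    """Check the signature against the device's public key."""
    try:
        device_pub.verify(sig, image, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False


# A photo of a real landscape and a photo of a screen displaying an
# AI-generated image are indistinguishable at this layer: both carry
# a genuine signature and both verify.
real_scene = b"pixels of an actual landscape"
screen_showing_ai = b"pixels of a monitor displaying a generated image"

print(verify(*capture_and_sign(real_scene)))         # True
print(verify(*capture_and_sign(screen_showing_ai)))  # True
```

The same model covers the capture-card variant mentioned below: anything that reaches the signing step as "camera input" gets a valid signature.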
It's worse than you think: there are official C2PA mobile apps which will apply C2PA tags to anything captured by a "camera".
This includes external USB cameras, such as HDMI capture cards.
You can probably see where this is going.
I didn't read your comment first, but you are absolutely correct.
https://news.ycombinator.com/item?id=45421293
There are other uses for C2PA, but detecting AI is a weak one.