You can't authenticate your way around the analog hole [1].
More generally, there's a whole slew of gaps in authentication that could be exploited on the end device. If you want to cryptographically prove that a photo is authentic at the moment it's taken, you have to prove that the app is running on a verified operating system on a trusted phone and not some emulated platform; that the phone's operating system hasn't been tampered with or rooted; that the data stream from the camera sensor to the phone hasn't been wired up to something feeding in spoofed video or images; the list goes on and on.
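To make that gap concrete, here's a rough sketch (Python, using the pyca/cryptography library; the manifest format is invented for illustration and is nothing like a real C2PA manifest) of what a per-device signing step actually proves: that some holder of the device key signed those exact bytes, and nothing about how the bytes got to the signer.

```python
# Sketch of the "chain of trust" idea: the capture app signs a manifest
# (image hash + metadata) with a device-held key. Verification only proves
# that *some holder of that key* signed those bytes; it says nothing about
# whether the sensor saw a real scene, or whether the OS/app feeding the
# signer was genuine. Manifest format is made up, not C2PA.
import hashlib, json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()  # in reality this would live in secure hardware

def sign_capture(image_bytes: bytes, metadata: dict) -> dict:
    manifest = {
        "image_sha256": hashlib.sha256(image_bytes).hexdigest(),
        "metadata": metadata,
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    return {"manifest": manifest, "signature": device_key.sign(payload).hex()}

def verify_capture(image_bytes: bytes, signed: dict, public_key) -> bool:
    manifest = signed["manifest"]
    if hashlib.sha256(image_bytes).hexdigest() != manifest["image_sha256"]:
        return False  # pixels were altered after signing
    payload = json.dumps(manifest, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(signed["signature"]), payload)
        return True  # the key signed these bytes -- nothing more is proven
    except InvalidSignature:
        return False

photo = b"raw sensor bytes (or a photo of a screen -- the math can't tell)"
signed = sign_capture(photo, {"device": "example-phone", "ts": 1700000000})
print(verify_capture(photo, signed, device_key.public_key()))  # True
```

Note that a phone pointed at a high-resolution screen showing a deepfake produces a perfectly valid signature.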
Provenance is an admirable goal to counter deepfake media, but we're far from this:
> once the photo is taken, it becomes virtually impossible to alter the image without breaking the digital chain of trust that confirms its authenticity
I think that once deepfake media gets good enough, society will be forced to revert to the trust dynamics that existed ~150 years ago (i.e. before widespread photo/video recording was available)--our perception of recordings as evidence in and of themselves will have to shift.
[1] https://en.wikipedia.org/wiki/Analog_hole
I've got some good solutions on this front, I think: https://dev.to/kylepena/addressing-the-threat-of-deep-fakes-...
The Nodle Network that backs this is a decentralized network of devices that act as witnesses to the location of the device taking the photo.
source: https://www.prnewswire.com/news-releases/nodle-introduces-cl...
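For what it's worth, here's roughly how I picture a witness scheme like that working, as a sketch only (this is not Nodle's actual protocol, and the field names, quorum rule, and geocell encoding are all made up): nearby devices co-sign a statement that they observed the capturing device at a rough location and time, and a verifier checks that enough independent witnesses agree.

```python
# Very rough sketch of the witness idea (not Nodle's actual protocol or API):
# nearby devices sign a statement about the capturing device's rough
# location/time, and a verifier requires a quorum of valid attestations.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def attest(witness_key: Ed25519PrivateKey, capture_id: str, geocell: str, ts: int) -> dict:
    statement = json.dumps(
        {"capture_id": capture_id, "geocell": geocell, "ts": ts}, sort_keys=True
    ).encode()
    return {
        "statement": statement,
        "signature": witness_key.sign(statement),
        "witness": witness_key.public_key(),
    }

def enough_witnesses(attestations: list[dict], quorum: int = 3) -> bool:
    valid = 0
    for a in attestations:
        try:
            a["witness"].verify(a["signature"], a["statement"])
            valid += 1
        except InvalidSignature:
            pass
    return valid >= quorum  # says nothing about collusion or Sybil witnesses

witnesses = [Ed25519PrivateKey.generate() for _ in range(3)]
atts = [attest(w, "photo-123", "geocell-9q8yy", 1700000000) for w in witnesses]
print(enough_witnesses(atts))  # True
```

Even if the crypto checks out, the hard part is what the quorum actually means: witnesses can collude, be emulated, or simply not be nearby, which is the same class of end-device trust problem discussed above.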
As I've posted elsewhere in the comments here, I think there are good solutions, but it will take some cooperation to get them off the ground (adoption of standards, licenses, etc.). Here is what I've proposed, which comes pretty close to what the CAI (Content Authenticity Initiative) is doing: https://dev.to/kylepena/addressing-the-threat-of-deep-fakes-...