2018-11-08
For the past year or so, smart Twitter people have been publicly worrying about the political impact of deepfakes: videos altered (or fabricated nearly from whole cloth) to show events that didn't happen, or didn't happen the way they're portrayed. The biggest source of concern, for me, is that a video can be shared bearing the watermark of a trusted source (e.g. a news agency) while actually being a modified version of the video that source originally published.
Here is a scheme that could defend against this sort of attack by authenticating videos as originating with a known source, even after they've been transcoded, embedded in some other video, and so forth.
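As a rough sketch of what the publisher's side of such a scheme might look like: hash coarse "perceptual" features of fixed-length video segments (so a transcoded copy, which changes the exact bytes but not the visual content, still matches), then sign the manifest of hashes. Everything here is my own illustrative assumption, not a spec: the segment granularity, the toy brightness-bucket hash, and the use of HMAC as a stand-in for a real public-key signature.

```python
import hashlib
import hmac
import json

PUBLISHER_KEY = b"news-agency-secret"  # stand-in; a real scheme would use a private signing key


def perceptual_hash(frames):
    """Toy perceptual hash: quantize each frame's average brightness into 16 buckets.

    A real scheme would use something genuinely robust to transcoding
    (e.g. a per-frame pHash); the coarse quantization here is what lets a
    re-encoded copy, with slightly different pixel values, hash the same.
    """
    levels = bytes(min(int(sum(f) / len(f)) // 16, 15) for f in frames)
    return hashlib.sha256(levels).hexdigest()


def sign_video(segments):
    """Hash each segment (a list of frames) and sign the manifest of hashes."""
    manifest = [perceptual_hash(seg) for seg in segments]
    payload = json.dumps(manifest).encode()
    signature = hmac.new(PUBLISHER_KEY, payload, hashlib.sha256).hexdigest()
    return {"hashes": manifest, "signature": signature}
```

In a real deployment the signed manifest would presumably travel in the video's metadata, or be published at a well-known URL, signed with the agency's private key so anyone holding the public key can check it.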
Then, in playback:
This scheme or some variation on it should, in theory, be resilient to:
Any other modification, such as speeding the video up, changing the words, or cutting out small sections, should fail the hash check and display a warning to the user.