The White House wants to ‘cryptographically verify’ videos of Joe Biden so viewers don’t mistake them for AI deepfakes
Biden’s AI advisor Ben Buchanan said a method of clearly verifying White House releases is “in the works.”

  • CyberSeeker@discuss.tchncs.de

    Digital signature as a means of non-repudiation is exactly the way this should be done. Any official docs or releases should be signed and easily verifiable by any public official.

    • mods_are_assholes@lemmy.world

      Maybe deepfakes are enough of a scare that this becomes standard practice, and protects encryption from getting government backdoors.

        • mods_are_assholes@lemmy.world

          Hey, congresscritters didn’t give a shit about robocalls till they were the ones getting robocalled.

          We had a do not call list within a year and a half.

          That’s the secret, make it affect them personally.

          • Daft_ish@lemmy.world

            Doesn’t that prove that government officials lack empathy? We see it again and again but still we keep putting these unfeeling bastards in charge.

            • mods_are_assholes@lemmy.world

              Well sociopaths are really good at navigating power hierarchies and I’m not sure there is an ethical way of keeping them from holding office.

              • Natanael@slrpnk.net

                It really depends on their motivation. The ones we need to keep out are the ones who enjoy hurting others or don’t care at all.

    • Otter@lemmy.ca

      Would someone have a high-level overview or ELI5 of what this would look like, especially for the average user? Would we need special apps to verify it? How would it work for stuff posted to social media?

      linking an article is also ok :)

      • AbouBenAdhem@lemmy.world

        Depending on the implementation, there are two cryptographic functions that might be used (perhaps in conjunction):

        • Cryptographic hash: An arbitrary amount of data (like a video file) is used to create a “hash”—a shorter, (effectively) unique text string. Anyone can run the file through the same function to see if it produces the same hash; if even a single bit of the file is changed, the hash will be completely different and you’ll know the data was altered.

        • Public key cryptography: A pair of keys is created: a private key, which only its owner holds and which is used to sign data, and a “public” key, which can only verify signatures made with the matching private key. Users (like the White House) can post their public key on their website; then if a subsequent message’s signature verifies against that public key, it proves the message came from them. (A rough sketch of both functions follows just below this list.)
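
        For a rough feel of how these two pieces fit together, here’s a minimal Python sketch using the standard hashlib module and the pyca/cryptography library. The file name and on-the-spot key generation are made up for illustration; an actual deployment would distribute the public key through something like a certificate.

        ```python
        import hashlib
        from cryptography.exceptions import InvalidSignature
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        # Hash: anyone can recompute this and compare it to a published value.
        def file_hash(path: str) -> str:
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            return h.hexdigest()

        # Signature: the private key signs, the public key verifies.
        private_key = Ed25519PrivateKey.generate()  # kept secret by the publisher
        public_key = private_key.public_key()       # posted publicly, e.g. on the publisher's site

        video = open("briefing.mp4", "rb").read()   # hypothetical file name
        signature = private_key.sign(video)

        try:
            public_key.verify(signature, video)     # raises if even one bit changed
            print("verified, hash:", file_hash("briefing.mp4"))
        except InvalidSignature:
            print("signature check failed")
        ```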

        • Serinus@lemmy.world

          a shorter, (effectively) unique text string

          A note on this. There are other videos that will hash to the same value as a legitimate video. Finding one that is coherent is extraordinarily difficult. Maybe a state actor could do it?

          But for practical purposes, it’ll do the job. Hell, if a doctored video with the same hash comes out, the White House could just say “no, we published this one,” and that alone would be remarkable.

          • AbouBenAdhem@lemmy.world

            Finding one that is coherent is extraordinarily difficult.

            You’d need to find one that was not just coherent, but that looked convincing and differed in a way that was useful to you—and that likely wouldn’t be guaranteed, even theoretically.

            • Natanael@slrpnk.net

              The pigeonhole principle says collisions must exist for any file substantially longer than the hash output, but they’re going to be hard to find.

            • ReveredOxygen@sh.itjust.works

              Even for a 4096-bit hash (which isn’t used afaik; usually only 1024 bits are used, but this could be outdated), you only need to change 4096 bits on average. Even for a still 1080p image, that’s 1920x1080 pixels. If you change the least significant bit of each color channel, you get 6,220,800 bits you can change without anyone noticing. That means there are, on average, around 1,518 identical-looking variations of any image with a given 4096-bit hash. This goes down a lot when you factor in compression: those least significant bits aren’t going to stay the same. But using a video brings it up by orders of magnitude: rather than one image, you can tweak colors in every frame.

              The difficulty doesn’t come from the existence of collisions; it comes from the fact that you’d need to check on the order of 2⁵¹² ≈ 10¹⁵⁴ different images to guarantee you’ll find a match. Even though each individual hash is quick to compute, that’s far too many candidates, so you’d have to run a supercomputer for an extremely long time to brute-force a hash collision.
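
              The “flip a least significant bit” idea is easy to see with a toy example (plain Python, no real image file involved): a visually identical tweak still produces a completely unrelated SHA-256 digest, which is why hitting a specific digest amounts to blind search.

              ```python
              import hashlib

              # A toy "image": raw RGB bytes (no real file format involved).
              pixels = bytearray(b"\x10\x20\x30" * 1000)
              original = hashlib.sha256(pixels).hexdigest()

              # Flip the least significant bit of one colour channel: visually identical...
              pixels[0] ^= 0b00000001
              tweaked = hashlib.sha256(pixels).hexdigest()

              # ...yet the digests share essentially nothing (the avalanche effect).
              print(original)
              print(tweaked)
              print(original == tweaked)  # False
              ```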

              • Natanael@slrpnk.net

                Most hash functions output 256 bits (they’re symmetric primitives, so they don’t need more in most cases).

                There are arbitrary-length functions (called XOFs, extendable-output functions, rather than hashes) which are built similarly and are used when you need to generate longer random-looking outputs.

                Other than that, yeah, the math shows you don’t need to change more data in the file than the length of the hash function’s internal state or output length (whichever is less) to create a collision. The reason they’re still secure is that it’s still extremely difficult to reverse the function or brute-force 2^256 possible inputs.
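
                To illustrate the fixed-output vs. extendable-output distinction, here’s a tiny sketch using Python’s built-in hashlib, with SHAKE256 standing in for an XOF:

                ```python
                import hashlib

                data = b"example input"

                # Fixed-output hash: SHA-256 always yields 32 bytes (256 bits).
                print(hashlib.sha256(data).hexdigest())

                # XOF: SHAKE256 lets the caller choose the output length (64 bytes here).
                print(hashlib.shake_256(data).hexdigest(64))
                ```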

                • ReveredOxygen@sh.itjust.works

                  Yeah, I was using a high length at first because even if you overestimate, that’s still a lot. I did 512 for the second because I don’t know a ton about cryptography, but that’s the largest SHA output.

          • CyberSeeker@discuss.tchncs.de

            There are other videos that will hash to the same value

            This concept is known as a ‘collision’ in cryptography. While technically true for weaker key sizes, there are entire fields of mathematics dedicated to provably ensuring collisions are cosmically unlikely. MD5 and SHA-1 have a small enough key space for collisions to be intentionally generated in a reasonable timeframe, which is why they have been deprecated for several years.

            To my knowledge, SHA-2 with sufficiently large key size (2048) is still okay within the scope of modern computing, but beyond that, you’ll want to use Dilithium or Kyber CRYSTALS for quantum resistance.

            • Natanael@slrpnk.net

              SHA family and MD5 do not have keys. SHA1 and MD5 are insecure due to structural weaknesses in the algorithm.

              Also, 2048 bits applies to RSA asymmetric keypairs, but SHA1 is 160 bits with a similarly sized internal state, and SHA256 is, as the name says, 256 bits.

              ECC is a public key algorithm which can have 256 bit keys.

              Dilithium is indeed a post quantum digital signature algorithm, which would replace ECC and RSA. But you’d use it WITH a SHA256 hash (or SHA3).

      • AtHeartEngineer@lemmy.world

        The best way this could be handled is a green check mark near the video that you could click on; it would give you all the metadata of the video (location, time, source, etc) along with a digital signature (which would look like a random string of text) that you could click on, and your browser would show you the chain of trust: where the signature came from, that it’s valid, probably the manufacturer of the equipment it was recorded on, etc.
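
        As a rough illustration of what such a signed metadata bundle might contain, here’s a hedged Python sketch. The fields, file name, and key handling are made up; the pyca/cryptography library stands in for whatever signing scheme would really be used, and the chain-of-trust part would in practice come from a certificate chain, much like TLS.

        ```python
        import hashlib, json
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        # Hypothetical manifest the check mark could expose to the viewer.
        manifest = {
            "source": "whitehouse.gov",             # claimed author
            "recorded_at": "2024-02-09T14:30:00Z",  # claimed time
            "location": "Washington, DC",           # claimed location
            "sha256": hashlib.sha256(open("briefing.mp4", "rb").read()).hexdigest(),
        }

        signing_key = Ed25519PrivateKey.generate()  # held by the publisher
        payload = json.dumps(manifest, sort_keys=True).encode()  # canonical form
        signature = signing_key.sign(payload)

        # The viewer's client re-serializes the manifest the same way and verifies
        # the signature against the publisher's known public key.
        signing_key.public_key().verify(signature, payload)
        print("manifest verified:", manifest["source"], manifest["recorded_at"])
        ```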

        • Natanael@slrpnk.net

          Do not show a checkmark by default! This is why cryptographers kept telling browsers to de-emphasize the lock icon on TLS (HTTPS) websites. You want to display the claimed author, and whether or not you were actually able to verify the authenticity of their keypair.

          • AtHeartEngineer@lemmy.world

            Fair point, I agree with this. There should probably be another icon in the browser that shows whether all, some, or none of the media on a page has signatures that can be validated. Though that gets messy as well, because what is “media”? Things can be displayed in a web canvas or SVG that appear to be regular images, when in reality they’re rendered on the fly.

            Security and cryptography UX is hard. Good point, thanks for bringing that up! Btw, this is kind of my field.

            • Natanael@slrpnk.net

              I run /r/crypto on reddit (not so active these days because I need to keep it locked due to spam bots, but it’s not dead yet), and usability issues like this are way too common.

              • AtHeartEngineer@lemmy.world

                I ran /r/cryptotechnology for years, and am good friends with the /r/cc mods. Reddit is a mess though, especially in the crypto areas.

      • Pup Biru@aussie.zone

        it would potentially be associated with a law that states that you must not misrepresent a “verified” UI element like a check mark etc, and whilst they could technically add a verified mark wherever they like, the law would prevent that - at least for US companies

        it may work in the same way as hardware certifications - i believe that HDMI has a certification standard that cables and devices must be manufactured to certain specifications to bear the HDMI logo, and the HDMI logo is trademarked so using it without permission is illegal… it doesn’t stop cheap knock offs, but it means if you buy things in stores in most US-aligned countries that bear the HDMI mark, they’re going to work

        • Kairos@lemmy.today

          There’s already some kind of legal structure for what you’re talking about: trademark. It’s called “I’m Joe Biden and I approve this message.”

          If you’re talking about HDCP you can break that with an HDMI splitter so IDK.

          • Captain Aggravated@sh.itjust.works

            Relying on trademark law to combat deepfake disinformation campaigns has the same energy as “Murder is already illegal, we don’t need gun control.”

          • Pup Biru@aussie.zone

            TLDR: trademark law yes, combined with a cryptographic signature in the video metadata… if a platform sees and verifies the signature, they are required to put the verified logo prominently around the video

            i’m not talking about HDCP no. i’m talking about the certification process for HDMI, USB, etc

            (random site that i know nothing about): https://www.pacroban.com/en-au/blogs/news/hdmi-certifications-what-they-mean-and-why-they-matter

            you’re right; that’s trademark law. basically you’re only allowed to put the HDMI logo on products that are certified as HDMI compatible, which has specifications on the manufacturing quality of cables etc

            in this case, you’d only be able to put the verified logo next to videos that are cryptographically signed in the metadata as originating from the whitehouse (or probably better, some federal election authority who signs any campaign videos as certified/legitimate: in australia we have the AEC - australian electoral commission - a federal body that runs our federal elections and investigates election issues, etc)

            now this of course wouldn’t work for sites outside of US control, but it would at least slow the flow of deepfakes on facebook, instagram, tiktok, the platform formerly known as twitter… assuming they implemented it, and assuming the govt enforced it

            • brbposting@sh.itjust.works

              Once an original video is cryptographically signed, could future uploads be automatically verified based on pixels plus audio? Could allow for commentary to clip the original.

              Might need some kind of minimum length restriction to prevent deceptive editing which simply (but carefully) scrambles original footage.

              • Pup Biru@aussie.zone

                not really… signature verification only works on exact copies (like byte-exact; not just “the same image” but the same file, formatted the same, without being resized, etc)… there are things called perceptual hashes, and ways of checking whether images are similar, but cryptography wouldn’t really help there

      • General_Effort@lemmy.world

        For the average end-user, it would look like “https”. You would not have to know anything about the technical background. Your browser or other media player would display a little icon showing that the media is verified by some trusted institution and you could learn more with a click.

        In practice, I see some challenges. You could already go to the source via https, e.g. whitehouse.gov, and verify it that way. An additional benefit exists only if you can verify media that have been re-uploaded elsewhere. Now the user needs to check not just that the media was signed by someone (e.g. whitehouse.gov.ru), but whether it was really signed by the right institution.

        • TheKingBee@lemmy.world

          As someone points out above, this just gives them the power to not authenticate real videos that make them look bad…

          • General_Effort@lemmy.world

            Videos by third parties, like Trump’s pussy grabber clip, would obviously have to be signed by them. After having thought about it, I believe this is a non-starter.

            It just won’t be as good as https. Such a signing scheme only makes sense if the media is shared away from the original website. That means you can’t just take a quick look at the address bar to make sure you are not getting phished. That doesn’t work if it could be any news agency. You have to make sure that the signer is really a trusted agency and not some scammy lookalike. That takes too much care for casual use, which defeats the purpose.

            Also, news agencies don’t have much of an incentive to allow sharing their media. Any cryptographic signature would only make sense for them if it directs users to their site, where they can make money. Maybe the potential for more clicks - basically a kind of clickable watermark on media - could make this take off.

          • dejected_warp_core@lemmy.world

            I honestly feel strategies like this should be mitigated by technically savvy journalism, or even citizen journalism. 3rd parties can sign and redistribute media in the public domain, vouching for their origin. While that doesn’t cover all the unsigned copies in existence, it provides a foothold for more sophisticated verification mechanisms like a “tineye” style search for media origin.

      • Starbuck@lemmy.world

        Adobe is actually one of the leading actors in this field, take a look at the Content Authenticity Initiative (https://contentauthenticity.org/)

        Like the other person said, it’s based on cryptographic hashing and signing. Basically the standard would embed metadata into the image.

      • Cocodapuf@lemmy.world

        It needs some kind of handler, but we mostly have those in place. A web browser could be the handler, for instance. A web browser has the green dot on the upper left, telling you a page is secure, that https is on and valid. This could work like that: the browser can verify the video and display a green or red dot in the corner, and the user could just mouse over it/tap on it to see who it’s verified to be from. But it’s up to the user to mouse over it and check if it says whitehouse.gov or dr-evil-mwahahaha.biz.

      • dejected_warp_core@lemmy.world

        TL;DR: one day the user will see an overlay or notification that shows an image/movie is verified as from a known source. No extra software required.

        Honestly, I can see this working great in future web browsers. Much like the padlock in the URL bar, we could see something on images that are verified. The image could display a padlock in the lower-left corner or something, along with the name of the source, demonstrating that it’s a securely verified asset. “Normal” images would be unaffected. The big problem is how to put something on the page that cannot be faked by other means.

        It’s a little more complicated for software like phone apps for X or Facebook, but doable. The problem is that those products must choose to add this feature. Hopefully, losing reputation to being swamped with unverifiable media will be motivation enough to do so.

        The underlying verification process is complex, but should be similar to existing technology (e.g. GPG). The key is that images and movies typically contain a “scratch pad” area in the file for miscellaneous stuff (metadata). This is where the image’s author can add a cryptographic signature for the file itself. The user would never even know it’s there.
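
        As a very rough illustration of that “scratch pad” idea, here’s a hedged Python sketch that signs an image’s pixel data and tucks the signature into a PNG text chunk. It assumes the Pillow and pyca/cryptography packages and a made-up author string; real standards (like the Content Authenticity Initiative mentioned elsewhere in this thread) are far more involved.

        ```python
        from PIL import Image
        from PIL.PngImagePlugin import PngInfo
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

        # Sign the pixel data (not the whole file, since the signature itself will
        # live in the file's metadata), then store the signature in a text chunk.
        key = Ed25519PrivateKey.generate()
        img = Image.new("RGB", (64, 64), "blue")   # stand-in for a real photo
        signature = key.sign(img.tobytes()).hex()

        meta = PngInfo()
        meta.add_text("author", "whitehouse.gov")  # hypothetical claimed source
        meta.add_text("signature", signature)
        img.save("signed.png", pnginfo=meta)

        # A viewer re-reads the pixels and checks the embedded signature.
        loaded = Image.open("signed.png")
        key.public_key().verify(bytes.fromhex(loaded.text["signature"]), loaded.tobytes())
        print("embedded signature verified")
        ```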

      • Ð Greıt Þu̇mpkin@lemm.ee

        Probably you’d notice a bit of extra time when posting, for the signature to be added, but that’s about it. The responsibility for verifying the signature would fall to the owners of the social media site, and in circumstances where someone asks for verification, basically imagine it as a libel case on fast forward: you file a claim saying “I never said that”, they check signatures, they shrug and press the delete button, and they erase the post, the crossposts, and - if it’s really good - the screencap posts and their crossposts of the thing you did not say but that is still being falsely attributed to your account or person.

        It basically gives a person absolute control of their own image and voice: unless a piece of media is provable to have been made with that person’s consent, or by that person themself, it can be wiped from the internet, no trouble.

        Where it comes to second-party posters, news agencies and such, it’d be more complicated but more or less the same, with the added step that a news agency may be required to provide some supporting evidence that what they said is not some kind of misrepresentation, as the offended party filing the takedown might be trying to insist for the sake of their public image.

        Of course there could still be a YouTube “Stats for Nerds”-esque addin to the options tab on a given post that allows you to sign-check it against the account it’s attributing something to, and a verified account system could be developed that adds a layer of signing that specifically identifies a published account, like say for prominent news reporters/politicians/cultural leaders/celebrities, that get into their own feed so you can look at them or not depending on how ya be feelin’ that particular scroll session.

    • Pup Biru@aussie.zone

      i wouldn’t say signature exactly, because that ensures that a video hasn’t been altered in any way: not re-encoded, resized, cropped, trimmed, etc… platforms almost always do some of these things to videos, even if it’s not noticeable to the end-user

      there are perceptual hashes, but i’m not sure if they work in a way that covers all those things or if they’re secure hashes. i would assume not

      perhaps platforms would read the metadata in a video for a signature and have to serve the video entirely unaltered if it’s there?

      • thantik@lemmy.world

        You don’t need to bother with cryptographically verifying downstream videos; only the source video needs to be cryptographically verifiable. That way you have an unedited, untampered cut that can be verified as matching what was actually broadcast.

        The White House could serve the video themselves if they so wanted to. Just use something similar to PGP for signature validation and voila. Studios can still do all the editing, cutting, etc - it shouldn’t be up to the end user to do the footwork on this, just for the studios to provide a kind of ‘chain of custody’ - they can point to the original verification video for anyone to compare to; in order to make sure alterations are things such as simple cuts, and not anything more than that.

        • Pup Biru@aussie.zone

          you don’t even need to cryptographically verify in that case because you already have a trusted authority: the whitehouse… if the video is on the whitehouse website, it’s trusted with no cryptography needed

          the technical solutions only come into play when you’re trying to modify the video and still accurately show that it’s sourced from something verifiable

          heck you could even have a standard where if a video adds a signature to itself, editing software will add the signature of the original, a canonical immutable link to the file, and timestamps for any cuts to the video… that way you (and by you i mean anyone; likely hidden from the user) can load up a video and be able to link to the canonical version to verify

          in this case, verification using ML would actually be much easier because you (servers) just download the canonical video, cut it as per the metadata, and compare what’s there to what’s in the current video
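
          to make that concrete, here’s a hedged toy sketch of what such edit metadata and an automated check could look like (plain Python; a “video” is modelled as a list of frame payloads and the URL is made up - real tooling would work on actual timecodes and containers):

          ```python
          import hashlib

          # toy model: a "video" is just a list of frame payloads
          original_frames = [f"frame-{i}".encode() for i in range(100)]  # canonical video

          # metadata a standards-compliant editor might attach to a trimmed clip
          manifest = {
              "canonical_url": "https://example.gov/original.mp4",       # made-up link
              "canonical_sha256": hashlib.sha256(b"".join(original_frames)).hexdigest(),
              "kept_ranges": [[10, 40], [60, 90]],                       # declared cuts
          }

          # the clip actually being served
          clip_frames = original_frames[10:40] + original_frames[60:90]

          # automated check: fetch the canonical frames, apply the declared cuts,
          # and compare the result against the served clip
          rebuilt = [f for a, b in manifest["kept_ranges"] for f in original_frames[a:b]]
          print("clip matches declared edit:", rebuilt == clip_frames)
          ```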

      • AbouBenAdhem@lemmy.world

        Rather than using a hash of the video data, you could just include within the video the timestamp of when it was originally posted, encrypted with the White House’s private key.

          • AbouBenAdhem@lemmy.world

            It does if you can also verify the date of the file, because the modified file will be newer than the timestamp. An immutable record of when the file was first posted (on, say, YouTube) lets you verify which version is the source.

            • Natanael@slrpnk.net

              No, it does not, because you can cut the timestamp out and put it into anything if the timestamp doesn’t encode anything about the frame contents.

              It is always possible to backdate file edits.

              Sure, public digital timestamping services exist, but most people will not check. Also, once again, an older timestamp can simply be cut out of one file and pasted into another file.

              You absolutely must embed something which identifies what the media file is, which can be used to verify ALL of the contents with cryptographic signatures. This may additionally refer to a verifiable timestamp at some timestamping service.

      • Natanael@slrpnk.net

        Apple’s scrapped on-device CSAM scanning was based on perceptual hashes.

        The first collision demo breaking them showed up in hours with images that looked glitched. After just a week the newest demos produced flawless images with collisions against known perceptual hash values.

        In theory you could create some ML-ish compact learning algorithm and use the compressed model as a perceptual hash, but I’m not convinced this can be secure enough unless it’s allowed to be large enough, as in some % of the original’s file size.

        • Pup Biru@aussie.zone

          you can definitely produce perceptual hashes that collide, but really you’re not just talking about a collision, you’re talking about a collision that’s also useful in subverting an election, AND that’s been generated using ML, which is something that’s still kinda shaky to start with

          • Natanael@slrpnk.net

            Perceptual hash collision generators can take arbitrary images and tweak them in invisible ways to make them collide with whichever hash value you want.

            • Pup Biru@aussie.zone

              from the comment above, it seems like it took a week for a single image/frame though… it’s possible sure but so is a collision in a regular hash function… at some point it just becomes too expensive to be worth it, AND the phash here isn’t being used as security because the security is that the original was posted on some source of truth site (eg the whitehouse)

              • Natanael@slrpnk.net

                No, it took a week to refine the attack algorithm; the collision generation itself is fast.

                The point of perceptual hashes is to let you check if two things are similar enough after transformations like scaling and reencoding, so you can’t rely on that here

                • Pup Biru@aussie.zone

                  oh yup that’s a very fair point then! you certainly wouldn’t use it for security in that case, however there are a lot of ways to implement this that don’t rely on the security of the hash function, but just use it (for example) to point to somewhere in a trusted source to manually validate that they’re the same

                  we already have the trust frameworks; that’s unnecessary… we just need to automatically validate (or at least provide automatic verifiability) that a video posted on some 3rd party - probably friendly or at least cooperative - platform represents reality

                  • Natanael@slrpnk.net

                    I think the best bet is really video formats with multiple embedded streams carrying complementary frame data (these already exist), so that playback quality depends on how many streams you choose to merge.

                    If you then hashed the streams independently and signed the list of hashes, then you have a video file which can be “compressed” without breaking the signature by stripping out some streams.
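
                    A minimal sketch of that stream-list signing idea, assuming toy byte strings in place of real video streams and the pyca/cryptography library for the signature:

                    ```python
                    import hashlib
                    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
                        Ed25519PrivateKey,
                    )

                    # toy streams standing in for the embedded layers of a scalable video
                    streams = {
                        "base": b"low-res frame data",
                        "enhance1": b"extra detail layer",
                        "enhance2": b"even more detail",
                    }

                    # hash each stream independently, then sign the list of hashes once
                    hashes = {n: hashlib.sha256(d).hexdigest() for n, d in streams.items()}
                    listing = "\n".join(f"{n}:{h}" for n, h in sorted(hashes.items())).encode()
                    key = Ed25519PrivateKey.generate()
                    signature = key.sign(listing)

                    # a "compressed" copy ships only the base layer plus the hash list and
                    # signature; the signature still verifies, and the kept stream still
                    # matches its listed hash
                    key.public_key().verify(signature, listing)  # raises if the list changed
                    assert hashlib.sha256(streams["base"]).hexdigest() == hashes["base"]
                    print("stripped-down copy still verifies")
                    ```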