If AI and deepfakes can listen to a video or audio recording of a person and then successfully reproduce that person's voice and likeness, what does this entail for trials?

It used to be that an audio or video recording carried strong evidentiary weight, often more than witness testimony, but soon enough perfect forgeries could enter the courtroom just as they're entering social media (where you're not sworn to tell the truth, though the consequences are real).

I know fake information is a problem everywhere, but I started wondering what will happen when it creeps into testimony.

How will we defend ourselves while still using real video or audio recordings as proof? Or are we just doomed?

  • GamingChairModel@lemmy.world · 2 months ago

    A camera that authenticates the timestamp and contents of an image is great. But it’s still limited. If I take that camera, mount it on a tripod, and take a perfect photograph of a poster of Van Gogh’s Starry Night, the resulting image will be yet another one of millions of similar copies, only with a digital signature proving that it was a newly created image today, in 2024.

    Authenticating what the camera sensor sees is only part of the problem, when the camera can be shown fake stuff, too. Special effects have been around for decades, and practical effects are even older.

    • LesserAbe@lemmy.world · 2 months ago

      You’re right, cameras can be tricked. As Descartes pointed out there’s very little we can truly be sure of, besides that we ourselves exist. And I think deepfakes are going to be a pretty challenging development in being confident about lots of things.

      I could imagine something like photographers with a news agency using cameras that generate cryptographically signed photos, to ward off claims that newsworthy events are fake. It would place a higher burden on naysayers, and it would also become a story in itself if it could be shown that a signed photo had been faked. It would become a cause for further investigation and would threaten a news agency's reputation.
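The signed-photo idea above can be sketched in a few lines. This is a toy, stdlib-only Python illustration with made-up key and data: a real system (the C2PA provenance standard, for instance) would use public-key signatures so anyone can verify a photo without holding a secret, whereas the HMAC here requires the verifier to share the camera vendor's key.

```python
import hashlib
import hmac
import time

# Hypothetical shared secret between camera firmware and verifier.
# Stand-in for a real public-key signing scheme.
SECRET_KEY = b"camera-vendor-secret"

def sign_photo(image_bytes: bytes, timestamp: float) -> dict:
    """Bind an image digest and a capture timestamp together with a signature."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = digest + ":" + repr(timestamp)
    sig = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "timestamp": timestamp, "signature": sig}

def verify_photo(image_bytes: bytes, record: dict) -> bool:
    """Recompute the signature over image + timestamp; compare in constant time."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    payload = digest + ":" + repr(record["timestamp"])
    expected = hmac.new(SECRET_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])

photo = b"...raw sensor data..."
record = sign_photo(photo, time.time())
print(verify_photo(photo, record))            # True: untouched image
print(verify_photo(photo + b"edit", record))  # False: contents were altered
```

Note that, as the parent comment points out, this only proves the sensor data and timestamp weren't altered after capture; it says nothing about whether the camera was pointed at a real scene or at a screen.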

      Going further I think one way we might trust people we aren’t personally standing in front of would be a cryptographic circle of trust. I “sign” that I know and trust my close circle of friends and they all do the same. When someone posts something online, I could see “oh, this person is a second degree connection, that seems fairly likely to be true” vs “this is a really crazy story if true, but I have no second or third or fourth degree connections with them, needs further investigation.”
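The circle-of-trust idea amounts to a graph search: each signature is an edge, and a "second degree connection" is a path of length two. A minimal Python sketch with hypothetical names and a hand-built graph (a real web of trust would verify the signatures on each edge before traversing it):

```python
from collections import deque

# Hypothetical trust graph: each person lists the friends they have signed for.
trust = {
    "me":    {"alice", "bob"},
    "alice": {"carol"},
    "bob":   {"dave"},
    "carol": {"poster"},
}

def degrees_of_trust(graph, start, target, max_depth=4):
    """Breadth-first search over signed-trust edges.

    Returns the connection degree (0 = yourself, 1 = direct friend, ...)
    or None if no trust path exists within max_depth hops.
    """
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        person, depth = queue.popleft()
        if person == target:
            return depth
        if depth == max_depth:
            continue  # don't expand beyond the trust horizon
        for friend in graph.get(person, ()):
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, depth + 1))
    return None

print(degrees_of_trust(trust, "me", "poster"))    # 3: me -> alice -> carol -> poster
print(degrees_of_trust(trust, "me", "stranger"))  # None: no trust path
```

A low degree would map to "fairly likely to be true," while `None` maps to "needs further investigation," exactly the two judgments described above.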

      I’m not saying any of this will happen, just it’s potentially a way to deal with uncertainty from AI content.