If AI and deepfakes can take video or audio of a person and then convincingly reproduce that person's voice and likeness, what does this mean for trials?

It used to be that an audio or video recording carried strong evidentiary weight, often more than witness testimony, but soon enough perfect forgeries could enter the courtroom, just as they are entering social media (where you're not sworn to tell the truth, though the consequences are real).

I know fake information is a problem everywhere, but I started wondering what will happen when it creeps into testimony.

How will we defend ourselves while still using real video or audio recordings as proof? Or are we just doomed?

  • ColeSloth@discuss.tchncs.de · 2 months ago

    Sure, but if you meet up with someone and they later produce an audio recording that is completely fabricated from real audio of you, there's no chain of custody to speak of. Audio used to be damning evidence, and it was fairly easy to tell when a recording had been spliced together to sound different. If that goes away, then audio just becomes useless as evidence.

    • hypna@lemmy.world · 2 months ago

      It becomes useless as evidence unless you can establish authenticity. It just puts audio recordings in a class with text documents: perfectly fakeable, but admissible with the right supporting information. So I agree it's a change, but it's not the end of audio evidence, and it's a change in a direction in which courts already have experience.
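      (As an aside, part of that "supporting information" can be technical. A minimal sketch, using only Python's standard hashlib: a cryptographic digest recorded when the audio is captured lets anyone later confirm that a submitted copy is bit-for-bit unaltered. This demonstrates integrity along the chain of custody, not that the original recording was genuine — that still takes testimony.)

```python
import hashlib

def file_digest(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks.

    If this digest is logged when the recording is first collected,
    any later copy can be checked against it to show it is unaltered.
    """
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # Read in chunks so large audio files don't need to fit in memory.
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()
```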

    • GamingChairModel@lemmy.world · 2 months ago

      You can’t just use an audio file by itself. It has to come from somewhere.

      The courts already have a system in place: anyone who seeks to introduce a screenshot of a text message, a printout of a webpage, a VHS tape with video, or a plain audio file needs someone to testify that it is real and accurate, with an opportunity for others to question and even investigate where it came from and how it was made/stored/copied.

      If I just show up to a car accident case with an audio recording that I claim is the other driver admitting he forgot to look before turning, that audio is gonna do basically nothing unless and until I show why I had a reason to be recording while talking to him, why I didn't give it to the police who wrote the accident report that day, etc. And even then, the other driver can say "that's not me and I don't know what you think that recording is," and we're still back to a credibility problem.

      We didn’t need AI to do impressions of people. This has always been a problem, or a non-problem, in evidence.