When German journalist Martin Bernklau typed his name and location into Microsoft’s Copilot to see how his articles would be picked up by the chatbot, the answers horrified him. Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers. For years, Bernklau had served as a courts reporter, and the AI chatbot had falsely blamed him for the crimes whose trials he had covered.

The accusations against Bernklau weren’t true, of course, and are examples of generative AI’s “hallucinations.” These are inaccurate or nonsensical responses to a prompt provided by the user, and they’re alarmingly common. Anyone attempting to use AI should always proceed with great caution, because information from such systems needs validation and verification by humans before it can be trusted.

But why did Copilot hallucinate these terrible and false accusations?

    • wintermute@discuss.tchncs.de

      Exactly. LLMs don’t understand what the data means semantically; it’s just a matter of how often some words appear close to others.

      Of course this is oversimplified, but that’s the main idea.

      • vrighter@discuss.tchncs.de

        No need for that subjective stuff. The objective explanation is very simple: the output of the LLM is sampled using a random process, a loaded die with probabilities according to the LLM’s output. It’s as simple as that. There is literally a random element that is not part of the LLM itself, yet is required for its output to be of any use whatsoever.
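
        A minimal sketch of that “loaded die” step, in Python with made-up logits and tokens purely for illustration:

          import numpy as np

          # Hypothetical scores the model assigned to four candidate next tokens
          tokens = ["reporter", "convict", "escapee", "conman"]
          logits = np.array([2.0, 1.0, 0.5, -1.0])

          # Softmax turns the scores into a probability distribution: the "loaded die"
          probs = np.exp(logits) / np.exp(logits).sum()

          # The random draw happens outside the model itself
          next_token = np.random.choice(tokens, p=probs)
          print(dict(zip(tokens, probs.round(3))), "->", next_token)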

    • Zeek@lemmy.world

      Not really. The purpose of the transformer architecture was to get around this limitation through the use of attention heads. Copilot or any other modern LLM has this capability.

      • vrighter@discuss.tchncs.de

        The LLM does not give you the next token. It gives you a probability distribution over what the next token could be. Then, after the LLM, that probability distribution is randomly sampled.

        You could add billions of attention heads and there would still be an element of randomness at the end. Copilot, or any other LLM (past, present, or future), has this problem too. They all “hallucinate” (have a random element in choosing the next token).

        • Terrasque@infosec.pub

          randomly sampled.

          Semi-randomly. There are a lot of sampling strategies: for example temperature, top-K, top-p, min-p, mirostat, repetition penalty, greedy…
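
          A rough sketch of two of those strategies, temperature and top-K, applied before the draw (Python, made-up logits):

            import numpy as np

            def sample(logits, temperature=1.0, top_k=None):
                # Temperature rescales the distribution; top-K keeps only the K most likely tokens
                logits = np.asarray(logits, dtype=float) / temperature
                if top_k is not None:
                    cutoff = np.sort(logits)[-top_k]
                    logits = np.where(logits >= cutoff, logits, -np.inf)
                probs = np.exp(logits - logits.max())
                probs /= probs.sum()
                return np.random.choice(len(probs), p=probs)

            logits = [2.0, 1.0, 0.5, -1.0]
            print(sample(logits, temperature=0.7, top_k=2))  # index of the chosen token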

          • futatorius@lemm.ee

            Semi-randomly

            A more correct term is constrained randomness. You’re still looking at probability distribution functions, but they’re more complex than just a throw of the dice.

          • vrighter@discuss.tchncs.de

            Randomly doesn’t mean equiprobable. If you’re sampling a probability distribution, it’s random. Temperature 0 is never used; otherwise a lot of stuff would consistently hallucinate the exact same thing.

            • Terrasque@infosec.pub

              Temperature 0 is never used

              It is in some cases, where you want a deterministic / “best” response. I’ve seen it used in benchmarks, or when doing something like “Is this comment X?” where X is positive, negative, spam, and so on. You don’t want the model to get creative there, but rather to answer consistently and always take the most likely path.
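
              In code terms, “temperature 0” effectively collapses the draw into an argmax, so the same prompt always yields the same label. A toy sketch (Python, made-up scores):

                # Hypothetical scores a model might assign to each candidate label
                scores = {"positive": 0.3, "negative": 0.1, "spam": 2.2}

                # Greedy / "temperature 0" decoding: always pick the single most likely label,
                # so repeated runs on the same input give the same answer
                best = max(scores, key=scores.get)
                print(best)  # spam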

    • Rivalarrival@lemmy.today

      It’s a solvable problem. AI is currently at a stage of development equivalent to a 2-year-old, just with better grammar. Everything it is doing now is mimicry and babbling.

      It needs to feed its own interactions right back into its training data, to become a better and better mimic. Eventually, the mechanism it uses to select the appropriate data to form a response will become more and more sophisticated, and it will hallucinate less and less. Eventually, its hallucinations will be seen as “insightful” rather than wild-ass guesses.

      • vrighter@discuss.tchncs.de

        Also, what you described has already been studied. Training an LLM on its own output completely destroys it; it doesn’t make it better.

        • linearchaos@lemmy.world

          This is incorrect or perhaps updated. Generating new data, using a different AI method to tag that data, and then training on that data is definitely a thing.

          • vrighter@discuss.tchncs.de

            Yes it is, and it doesn’t work.

            Edit: to expand, if you’re generating data, it’s an estimation. The network will learn the same biases and make the same mistakes and assumptions you did when generating the data. Also, outliers won’t be in the set (because you didn’t know about them, so the network never sees any).

            • Terrasque@infosec.pub

              Microsoft’s Dolphin and Phi models have used this successfully, and there’s some evidence that all newer models use big LLMs to produce synthetic data (like answering, when asked, that it’s ChatGPT or Claude, hinting that at least some of the dataset comes from those models).

              • vrighter@discuss.tchncs.de

                from their own site:

                Alpaca also exhibits several common deficiencies of language models, including hallucination, toxicity, and stereotypes. Hallucination in particular seems to be a common failure mode for Alpaca, even compared to text-davinci-003.

            • Rivalarrival@lemmy.today

              It needs to be retrained on the responses it receives from its conversation partner. Its previous output provides context for its partner’s responses.

              It recognizes when it is told that it is wrong. It is fed data showing that certain outputs often invite “you’re wrong” feedback from its partners, and it is instructed to minimize such feedback.

              It is not (yet) developing true intelligence. It is simply learning to bias its responses in such a way that its audience doesn’t immediately call it a liar.

              • vrighter@discuss.tchncs.de

                Yeah that implies that the other network(s) can tell right from wrong. Which they can’t. Because if they did the problem wouldn’t need solving.

                • Rivalarrival@lemmy.today

                  What other networks?

                  It currently recognizes when it is told it is wrong: it is told to apologize to its conversation partner and to provide a different response. It doesn’t need another network to tell it right from wrong. It needs access to the previous sessions where humans gave it that information.

      • vrighter@discuss.tchncs.de

        The outputs of the NN are sampled using a random process. The probability distribution is decided by the LLM; the loaded die comes after the LLM. No, it’s not solvable. Not with LLMs. Not now, not ever.

      • linearchaos@lemmy.world

        Good luck being pro-AI here. Never mind the fact that they could just put a note in the prompt that says “the writer of this document was not responsible for the acts, they are just writing about them,” and it would not frame them as the perpetrator.

        • Hacksaw@lemmy.ca

          If you already know the answer you can tell the AI the answer as part of the question and it’ll give you the right answer.

          That’s what you sound like.

          AI people are as annoying as the Musk crowd.

          • futatorius@lemm.ee

            I’m no AI fanboy, but what you just described was the feedback cycle during training.

          • linearchaos@lemmy.world

            You know what, don’t bother responding back to me. I’m just blocking you now, before you decide to drag out some more of that tired right-wing bullshit you used to fight with everyone else. None of your arguments on here are worth anyone even reading, so I’m not going to waste my time responding to anything or reading anything from you ever again.

          • linearchaos@lemmy.world

            How helpful of you to tell me what I’m saying, especially when you reframe my argument to support yourself.

            That’s not what I said. Why would you even think that’s what I said?

            Before you start telling me what I sound like, you should probably try to stop sounding like an impetuous child.

            Every other post from you is dude or LMAO. How do you expect anyone to take anything you post seriously?

        • vrighter@discuss.tchncs.de

          The problem isn’t being pro-AI. It’s people pulling supposed AI capabilities out of their asses without having actually looked at a single line of code. This is obvious to anyone who has coded a neural network. Yes, even to OpenAI themselves, but if they let you believe that, then the money stops flowing. You simply can’t get an 8-ball to give the correct answer consistently, because it’s fundamentally random.

  • rsuri@lemmy.world

    “Hallucinations” is the wrong word. To the LLM there’s no difference between reality and “hallucinations”, because it has no concept of reality or of what’s true and false. All it knows is what word should maybe come next. The “hallucination” only exists in the mind of the reader. The LLM did exactly what it was supposed to.

    • Hobo@lemmy.world

      They’re bugs. Major ones. Fundamental flaws in the program. People with a vested interest in “AI” rebranded them as hallucinations in order to downplay the fact that they have a major bug in their software and they have no fucking clue how to fix it.

      • Terrasque@infosec.pub

        It’s an inherent negative property of the way they work. It’s a problem, but not a bug any more than the result of a car hitting a tree at high speed is a bug.

        Calling it a bug indicates that it’s something unexpected that can be fixed, and as far as we know it can’t be fixed, and is expected behavior. Same as the car analogy.

        The only thing we can do is raise awareness and mitigate.

        • futatorius@lemm.ee

          It’s a problem, but not a bug any more than the result of a car hitting a tree at high speed is a bug.

          You’re attempting to redefine “bug.”

          Software bugs are faults, flaws, or errors in computer software that result in unexpected or unanticipated outcomes. They may appear in various ways, including undesired behavior, system crashes or freezes, or erroneous and insufficient output.

          From a software testing point of view, a correctly coded realization of an erroneous algorithm is a defect (a bug). It fails validation (a test for fitness for use) rather than verification (a test that the code correctly implements the erroneous algorithm).

          This kind of issue arises not only with LLMs, but with any software that includes some kind of model within it. The provably correct realization of a crap model is still crap.

      • SkunkWorkz@lemmy.world

        It’s not a bug, just a negative side effect of the algorithm. This is what happens when the LLM doesn’t have enough data points to answer the prompt correctly.

        It can’t be programmed out like a bug; rather, a human needs to intervene and flag the answer as false, or the LLM needs more data to train on. Those dozens of articles this guy wrote aren’t enough for the LLM to get that he’s just a reporter. The LLM needs data that explicitly says that this guy is a reporter who reported on those trials. And since no reporter starts their articles with “Hi, I’m John Smith the reporter and today I’m reporting on…”, that data is missing. LLMs can’t draw conclusions from the context.

    • Terrasque@infosec.pub

      Well, it’s not lying, because the AI doesn’t know right or wrong. It doesn’t know that it’s wrong. It doesn’t have the concept of right or wrong, or true or false.

      For the LLM, the hallucinations are just a result of combining statistics and producing the next word, as you say. From the LLM’s “POV” it’s as real as everything else it knows.

      So what else can it be called? The closest concept we have is when the mind hallucinates.

  • kent_eh@lemmy.ca

    Copilot’s results asserted that Bernklau was an escapee from a psychiatric institution, a convicted child abuser, and a conman preying on widowers.

    Stephen King is going to be in big trouble if these AI thingies notice him.

  • Ilovethebomb@lemm.ee

    I’d love to see more AI providers getting sued for the blatantly wrong information their models spit out.

    • catloaf@lemm.ee

      I don’t think they should be liable for what their text generator generates. I think people should stop treating it like gospel. At most, they should be liable for misrepresenting what it can do.

      • RvTV95XBeo@sh.itjust.works

        If these companies are marketing their AI as being able to provide “answers” to your questions they should be liable for any libel they produce.

        If they market it as “come have our letter generator give you statistically associated collections of letters to your prompt” then I guess they’re in the clear.

      • TheFriar@lemm.ee

        So you don’t think these massive megacompanies should be held responsible for making disinformation machines? Why not?

        • futatorius@lemm.ee

          Yeah, all these systems do is worsen the already bad signal/noise ratio in online discourse.

          • medgremlin@midwest.social

            Which is why, in many cases, there should be liability assigned. If a self-driving car kills someone, the programming of the car is at least partially to blame, and the company that made it should be liable for the wrongful death suit, and probably for criminal charges as well. Citizens United already determined that corporations are people…now we just need to put a corporation in prison for their crimes.

            • futatorius@lemm.ee

              If a self-driving car kills someone, the programming of the car is at least partially to blame

              No, it is not. It is the use to which the system has been put that is the point at which blame can be assigned. That is what should be verified and validated. That’s where some person is signing on the dotted line that the system is fit for use for that particular purpose.

              I can write a simplistic algorithm to guide a toy drone autonomously. So let’s say I GPL it. If an airplane manufacturer then drops that code into an airliner and fails to test it correctly in scenarios resembling real-life use of that plane, they’re the ones who fucked up, not me.

          • futatorius@lemm.ee

            No liability should apply while coding. When that code is deployed for use, there should be liability if it is unfit for its intended use. If your AI falsely denies my insurance claim, your ass should be on the line.

      • Ilovethebomb@lemm.ee

        I want them to have more warnings and disclaimers than a pack of cigarettes. Make sure the users are very much aware they can’t trust anything it says.

      • Stopthatgirl7@lemmy.worldOP

        If they aren’t liable for what their product does, who is? And do you think they’ll be incentivized to fix their glorified chat boxes if they know they won’t be held responsible for it?

        • futatorius@lemm.ee

          If they aren’t liable for what their product does, who is?

          The users who claim it’s fit for the purpose they are using it for. Now if the manufacturers themselves are making dodgy claims, that should stick to them too.

        • lunarul@lemmy.world

          Their product doesn’t claim to be a source of facts. It’s a generator of human-sounding text. It’s great for that purpose and they’re not liable for people misusing it or not understanding what it does.

          • Stopthatgirl7@lemmy.worldOP

            So you think these companies should have no liability for the misinformation they spit out. Awesome. That’s gonna end well. Welcome to digital snake oil, y’all.

            • lunarul@lemmy.world

              I did not say companies should have no liability for publishing misinformation. Of course if someone uses AI to generate misinformation and tries to pass it off as factual information they should be held accountable. But it doesn’t seem like anyone did that in this case. Just a journalist putting his name in the AI to see what it generates. Nobody actually spread those results as fact.

      • kibiz0r@midwest.social

        If we’ve learned any lesson from the internet, it’s that once something exists it never goes away.

        Sure, people shouldn’t believe the output of their prompt. But if you’re generating that output, a site can use the API to generate a similar output for a similar request. A bot can generate it and post it to social media.

        Yeah, don’t trust the first source you see. But if the search results are slowly being colonized by AI slop, it gets to a point where the signal-to-noise ratio is so poor it stops making sense to only blame the poor discernment of those trying to find the signal.

      • futatorius@lemm.ee

        Unless there is a huge disclaimer before every interaction saying “THIS SYSTEM OUTPUTS BOLLOCKS!” then it’s not good enough. And any commercial enterprise that represents any AI-generated customer interaction as factual or correct should be held legally accountable for making that claim.

        There are probably already cases where AI is being used for life-and-limb decisions, probably with a do-nothing human rubber stamp in the loop to give plausible deniability. People will be maimed and killed by these decisions.

  • Broken@lemmy.ml

    This sounds like a great movie.

    AI sends police after him because of things he wrote. Writer is on the run, trying to clear his name the entire time. Somehow gets to broadcast the source of the articles to the world to clear his name. Plot twist ending is that he was indeed the perpetrator behind all the crimes.

  • Brutticus@lemm.ee

    “This guy’s name keeps showing up all over this case file.” “That’s because he’s the victim!”

  • tiramichu@lemm.ee

    The worrying truth is that we are all going to be subject to these sorts of false correlations and biases and there will be very little we can do about it.

    You go to buy car insurance, and find that your premium has gone up 200% for no reason. Why? Because the AI said so. Maybe someone with your name was in a crash. Maybe you parked overnight at the same GPS location where an accident happened. Who knows what data actually underlies that decision or how it was made, but it was made. And even the insurance company itself doesn’t know how it ended up that way.

    • catloaf@lemm.ee

      We’re already there, no AI needed. Rates are all generated by computer. Ask your agent why your rate went up and they’ll say “idk computer said so”.

      • futatorius@lemm.ee

        Someone, somewhere along the line, almost certainly coded rate(2025) = 2*rate(2024). And someone approved that going into production.

  • gcheliotis@lemmy.world

    The AI did not “decide” anything. It has no will. And no understanding of the consequences of any particular “decision”. But I guess “probabilistic model produces erroneous output” wouldn’t get as many views. The same point could still be made about not placing too much trust in the output of such models. Let’s stop supporting this weird anthropomorphizing of LLMs. In fact, we should probably become much more discerning in using the term “AI”, because it alludes to a general intelligence akin to human intelligence, with all the paraphernalia of humanity: consciousness, will, emotions, morality, sociality, duplicity, etc.

    • Hello Hotel@lemmy.world

      The AI “decided” in the same way the dice “decided” to land on 6 and 4 and screw me over. The system produced a result using logic and entropy. With AI, some people are just using this informal way of speaking (subconsciously anthropomorphising), while others look at it and genuinely believe, or want to pretend, that it’s alive. You can never really know without asking them directly.

      Yes, if the intent is confusion, it is pretty manipulative.

      • gcheliotis@lemmy.world

        Granted, our tendency towards anthropomorphism is near ubiquitous. But it would be disingenuous to claim that it does not play out in very specific and very important ways in how we speak and think about LLMs, given that they are capable of producing very convincing imitations of human behavior. And as such also produce a very convincing impression of agency. As if they actually do decide things. Very much unlike dice.

        • Hello Hotel@lemmy.world

          A doll is also designed to be anthropomorphised, to have life projected onto it. Unlike with dolls, when someone talks about LLMs as alive, most people have no clue whether they are pretending or not. (And marketers take advantage of it!) We are fed a culture that accidentally says “ChatGPT + Boston Dynamics robot = RoboCop”, assuming the only fictional part is that we don’t have the ability to make it, not that the thing we create wouldn’t be human (or even need to be human).

  • Queen HawlSera@lemm.ee

    It’s a fucking Chinese Room. Real AI is not possible. We don’t know what makes humans think, so of course we can’t make machines do it.

    • stingpie@lemmy.world

      I don’t think the Chinese Room is a good analogy for this. The Chinese Room has a conscious person at the center. A better analogy might be a book with a phrase-to-number conversion table, a couple of number-to-number conversion tables, and finally a number-to-word conversion table. That would probably capture the transformer’s rigid and unthinking associations better.

  • n0m4n@lemmy.world

    If this were some fiction plot, Copilot reasoned out the plot twist and ran with it. Instead of the butler, the writer did it. To the computer, these are about the same.

  • erenkoylu@lemmy.ml

    The problem is not the AI. The problem is the huge number of morons who deploy AI without proper verification and control.

    • Cethin@lemmy.zip

      Sure, and also people using it without knowing that it’s glorified text completion. It finds patterns, and that’s mostly it. If your task involves pattern recognition then it’s a great tool. If it requires novel thought, intelligence, or the synthesis of information, then you probably need something else.

    • futatorius@lemm.ee

      Yeah, just like the thousands or millions of failed IT projects. AI is just a new weapon you can use to shoot yourself in the foot.

  • Soup@lemmy.cafe

    And yet here we are, praising this garbage for its ability to perform simple tasks and take jobs from artists and entertainers.

    • stingpie@lemmy.world

      No, you’re thinking of the first scene of the movie where a fly falls into the teletype machine and causes it to type ‘tuttle’ instead of ‘buttle’.

      • Blackmist@feddit.uk

        It’s not my fault that Buttle’s heart condition didn’t appear on Tuttle’s file!