I’m rather curious to see how the EU’s privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)

  • Zeth0s@lemmy.world · +15 / −1 · 1 year ago

    It’s the same in your brain, though. There is no number in your brain, just a set of synapses that lets a depolarization wave propagate across neurons via neurotransmitters released and absorbed in a narrow space.

    The way the brain is built allows you to “remember” stuff: to reconstruct information that is stored incompletely, as different, unique connections in a network. But it is not “certain”; we can’t know whether it’s the absolute truth. That’s why we need password databases and phone books: our memory is not a database. It is probably worse than GPT-4.

    • Veraticus@lib.lgbt · +1 / −10 · 1 year ago

      It doesn’t matter that there is no literal number in your brain and that there are chemical/electrical impulses instead. There is an impulse there signifying your childhood phone number. You did (and do) know that number. And other things too, presumably.

      While our brains are not perfectly efficient, we can and do actually store information in them. Information that we can judge as correct or incorrect; true or false; extant or nonexistent.

      LLMs don’t know anything and never knew anything. Their responses are the output of a mathematical model of word likelihood.
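
      A toy sketch in Python of what a “mathematical model of word likelihood” means. The bigram table and every probability below are invented purely for illustration; a real LLM learns billions of such weights, but the principle is the same:

      ```python
      # Toy bigram "language model": each word maps to the probabilities of
      # the words that may follow it. Nothing here stores a fact; generation
      # is sampling by likelihood alone.
      import random

      bigram_probs = {
          "my":     {"phone": 0.5, "number": 0.3, "cat": 0.2},
          "phone":  {"number": 0.9, "call": 0.1},
          "number": {"is": 1.0},
          "is":     {"555-0199": 0.4, "555-0123": 0.35, "unlisted": 0.25},
      }

      def next_word(word: str) -> str:
          """Sample the next word purely by probability -- no fact lookup."""
          options = bigram_probs[word]
          words = list(options)
          return random.choices(words, weights=[options[w] for w in words])[0]

      sentence = ["my"]
      while sentence[-1] in bigram_probs:
          sentence.append(next_word(sentence[-1]))
      print(" ".join(sentence))
      # Prints whichever "phone number" is probable, not one that is true.
      ```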

      They don’t understand English. They don’t know what reality is like or what a phone number represents. If they get your phone number wrong, it isn’t because they “misremembered” or because they’re “uncertain.” It’s because they are literally incapable of retaining a fact. The phone number you asked for is part of a mathematical model now, and they will return the output of that model, not the correct phone number.

      Conversely, if you get your own phone number wrong, it isn’t because you never knew it. It’s because memory is imperfect and degrades over time.

      • Zeth0s@lemmy.world · +4 · 1 year ago

        There is no such impulse; there is a neural network in your brain. This AI stuff was born as a simulation of human neural networks.

        And your neural network cannot tell whether something is true or untrue; it might remember a phone number as true even when it is not. English literally has a word for that, which you used yourself: “misremembered.” It is that common…

        It is true that LLMs do not know in a human way: they do not understand, and they cannot tell whether what they say is true. But they do retain facts. Ask ChatGPT who won the F1 championship in 2001. It knows. I have trouble remembering it correctly; I need to check. GPT-4 knows it better than I do, and I was there. No shame in that.
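
        If you want to run that test yourself, a minimal sketch using the OpenAI Python client (the model name is an assumption; swap in whatever you have access to, and verify the answer as this thread suggests):

        ```python
        # Ask the model a factual question; requires OPENAI_API_KEY in the
        # environment. Uses the openai v1 Python package.
        from openai import OpenAI

        client = OpenAI()
        response = client.chat.completions.create(
            model="gpt-4",  # assumed model name
            messages=[{"role": "user", "content":
                       "Who won the Formula 1 World Championship in 2001?"}],
        )
        print(response.choices[0].message.content)
        # Expected answer: Michael Schumacher -- but verify it yourself.
        ```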

        • Veraticus@lib.lgbt · +1 / −6 · 1 year ago

          You can indeed tell if something is true or untrue. You might be wrong, but that is quite different – you can have internal knowledge that is right or wrong. The very word “misremembered” implies that you did (or even could) know it properly.

          LLMs do not retain facts and they can and frequently do get information wrong.

          Here’s a good test. Choose a video game or TV show you know really well, something a little older and somewhat complicated, and ask ChatGPT about specific plot points in it.

          As an example, I know Final Fantasy 14 extremely well and have played it for a long time. ChatGPT will confidently state facts about the game that are entirely and totally incorrect: it confuses characters and moves plot points around. This is because it chooses what is likely to be said, not what is actually correct. Indeed, it has no ability to know what is correct at all.

          AI is not a simulation of human neural networks. It uses the concept of mathematical neural networks, but it is a word model, nothing more.
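
          For contrast, here is what a node in a “mathematical neural network” actually is: a weighted sum passed through a squashing function, plain arithmetic with nothing biological in it. The weights below are arbitrary example values:

          ```python
          # One artificial "neuron": a weighted sum of inputs plus a bias,
          # passed through a sigmoid. No synapses, neurotransmitters, or
          # depolarization waves -- just arithmetic.
          import math

          def neuron(inputs: list[float], weights: list[float], bias: float) -> float:
              activation = sum(x * w for x, w in zip(inputs, weights)) + bias
              return 1.0 / (1.0 + math.exp(-activation))

          print(neuron([0.2, 0.9], [0.4, -0.6], bias=0.1))  # arbitrary values
          ```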

          • fsmacolyte@lemmy.world · +2 · 1 year ago

            The free version gets things wrong a lot. It’s impressive how good GPT-4 is. Human brains are still a million times better in almost every way (they run on a few dollars’ worth of energy per day, for example), but it’s really hard to believe how capable the state of the art in LLMs is until you’ve tried it.

            You’re right about one thing, though. Humans are able to know things, and to know when we don’t know things. Current LLMs (transformer-based architectures) simply can’t do that yet.