I’m rather curious to see how the EU’s privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)

  • Veraticus@lib.lgbt

    No, you knowing your old phone number is closer to how a database knows things than how LLMs know things.

    LLMs don’t “know” information. They don’t retain individual facts, or know that something is true and something else is false (or that anything “is” at all). Everything they say is generated from the likelihood of one word following another, given the context those words appear in.

    You can’t ask it to “forget” a piece of information because there’s no “childhood phone number” in its memory. Instead, there’s an increased likelihood it will say your phone number when someone prompts it for a phone number. It doesn’t “know” the information at all; the information has simply become part of the weights it uses to generate phrases.
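
    As a toy illustration of what “the likelihood of one word following another” means, here is a minimal bigram sketch (nothing like a real transformer, and the phone number is made up):

    ```python
    from collections import Counter, defaultdict

    # Tiny training "corpus" that happens to contain a made-up phone number.
    corpus = "call me at 555 0199 or call me later".split()

    # Count which word follows which -- the crudest possible version of
    # "likelihood of one word following another".
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def most_likely_next(word):
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else None

    print(most_likely_next("at"))   # 555  -- not a retrieved fact
    print(most_likely_next("555"))  # 0199 -- just the statistically likely next token
    ```

    The digits come back only because that sequence was likely in the training text; there is no stored record of the number that could be looked up, or deleted, on its own.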

    • Zeth0s@lemmy.world

      It’s the same in your brain though. There is no number in your brain. Just a set of synapses that allows a depolarization wave to propagate across neurons, via neurotransmitters released and absorbed in a narrow space.

      The way the brain is built allows you to “remember” stuff: to reconstruct information that is incompletely stored as distinct, unique connections in a network. But it is not “certain”; we can’t know whether it’s the absolute truth. That’s why we need password databases and phone books: our memory is not a database. It is probably worse than GPT-4.

      • Veraticus@lib.lgbt

        It doesn’t matter that there is no literal number in your brain and that there are instead chemical/electrical impulses. There is an impulse there signifying your childhood phone number. You did (and do) know that number. And other things too, presumably.

        While our brains are not perfectly efficient, we can and do actually store information in them. Information that we can judge as correct or incorrect; true or false; extant or nonexistent.

        LLMs don’t know anything and never knew anything. Their responses are mathematical models of word likelihood.

        They don’t understand English. They don’t know what reality is like or what a phone number represents. If they get your phone number wrong, it isn’t because they “misremembered” or because they’re “uncertain.” It’s because they are literally incapable of retaining a fact. The phone number you asked it for is part of a mathematical model now, and it will return the output of that model, not the correct phone number.

        Conversely, even if you get your phone number wrong, it isn’t because you didn’t know it. It’s because memory is imperfect and degrades over time.

        • Zeth0s@lemmy.world

          There is no such impulse; there is a neural network in your brain. This AI stuff was born as a simulation of human neural networks.

          And your neural network cannot tell if something is true or untrue; it might remember a phone number as true even if it is not. English literally has a word for that, which you used: misremembered. It is that common…

          It is true that LLMs do not know things in a human way; they do not understand, and they cannot tell if what they say is true. But they do retain facts. Ask ChatGPT who won the F1 championship in 2001. It knows. I have trouble remembering it correctly; I need to check. GPT-4 knows it better than me, and I was there. No shame in that.

          • Veraticus@lib.lgbt

            You can indeed tell if something is true or untrue. You might be wrong, but that is quite different – you can have internal knowledge that is right or wrong. The very word “misremembered” implies that you did (or even could) know it properly.

            LLMs do not retain facts and they can and frequently do get information wrong.

            Here’s a good test. Choose a video game or TV show you know really well – something a little older and somewhat complicated. Ask ChatGPT about specific plot points in it.

            As an example, I know Final Fantasy 14 extremely well and have played it for a long time. ChatGPT will confidently state facts about the game that are entirely and totally incorrect: it confuses characters, it moves plot points around. This is because it chooses what is likely to be said, not what is actually correct. Indeed, it has no ability to know what is correct at all.

            AI is not a simulation of human neural networks. It uses the concept of mathematical neural networks, but it is a word model, nothing more.

            • fsmacolyte@lemmy.world

              The free version gets things wrong a bunch. It’s impressive how good GPT-4 is. Human brains are still a million times better in almost every way (they cost only a few dollars’ worth of energy per day to operate, for example), but it’s really hard to believe how capable state-of-the-art LLMs are until you’ve tried them.

              You’re right about one thing though. Humans are able to know things, and to know when we don’t know things. Current LLMs (transformer-based architecture) simply can’t do that yet.

    • MarcoPogo@lemmy.world

      Are we sure that this is substantially different from how our brain remembers things? We also remember by association

      • Veraticus@lib.lgbt

        But our memories exist – I can say definitively “I know my childhood phone number.” It might be meaningless, but the information is stored in my head. I know it.

        AI models don’t know your childhood phone number, even if you tell them explicitly, even if they were trained on it. Your childhood phone number becomes part of a model of word weights that makes it slightly more likely, when someone asks for a phone number, that some digits of your childhood phone number will appear (or perhaps the entire thing!).

        But the original information is lost.

        You can’t ask it to “forget” the phone number because it doesn’t know it and never knew it. Even if it supplies literally your exact phone number, it isn’t because it knew your phone number or because that information is correct. It’s because that sequence of numbers is, based on its model, very likely to occur in that order.

    • theneverfox@pawb.social

      This isn’t true at all - first, we don’t know things like a database knows things.

      Second, they do retain individual facts in the same sort of way we know things: through relationships. The difference is, for us the Eiffel tower is a concept, and the name, appearance, and everything else about it are relationships - we can forget the name of something but remember everything else about it. They’re word-based, so the name is everything for them - they can’t learn facts about a building and then later learn its name and keep those facts attached, but they could later learn additional names for it

      For example, they did experiments using some visualization tools and edited the model manually. They changed the link between the Eiffel tower and Paris to Rome, and the model began to believe it was in Rome. You could then ask what you’d see from the Eiffel tower, and it’d start listing landmarks like the Colosseum

      So you absolutely could have it erase facts - you just have to delete relationships or scramble details. It just might have unintended side effects, and no tools currently exist to do this in an automated fashion
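
      Roughly, those editing experiments amount to rewriting one key→value link inside the network. Here’s a toy sketch of just that core idea - a single linear layer used as an associative memory, plus a rank-one update that swaps “Paris” for “Rome” - not the actual method from the model-editing papers, and all the vectors are made up:

      ```python
      import numpy as np

      rng = np.random.default_rng(0)
      dim = 64

      # Made-up vectors standing in for the model's internal representations.
      key_eiffel = rng.normal(size=dim)  # "Eiffel tower" concept
      val_paris = rng.normal(size=dim)   # "is in Paris" association
      val_rome = rng.normal(size=dim)    # "is in Rome" association

      # A linear layer acting as an associative memory: W @ key ~= value.
      W = np.outer(val_paris, key_eiffel) / (key_eiffel @ key_eiffel)

      # Rank-one edit: make the same key map to "Rome" instead of "Paris".
      W += np.outer(val_rome - W @ key_eiffel, key_eiffel) / (key_eiffel @ key_eiffel)

      print(np.allclose(W @ key_eiffel, val_rome))  # True: the link was rewritten in place
      ```

      In a real LLM you first have to locate which weights hold the link, and the edit can disturb related facts, which is part of why there’s no clean automated tooling for this yet.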

      For humans, it’s much harder - our minds use layers of abstraction and aren’t a unified set of info. That means you could zap knowledge of the Eiffel tower, and we might forget about it. But then, thinking about Paris, we might remember it and rebuild certain facts about it; then, thinking about world fairs, we might remember when it was built and by whom, etc.

      • Veraticus@lib.lgbt

        We know things more the way a database knows things than the way LLMs do – and LLMs do not “know” anything in any sense. Databases contain data; our heads contain memories. We can consult them and access them. LLMs do not do that. They have no memories and no thoughts.

        They are not word-based. They contain only words. Given a word and its context, they create textual responses. But an LLM does not “know” what it is talking about. It is a mathematical model that responds with likely responses sourced from the corpus it was trained on. It generates phrases from source material and randomness, nothing more.

        If a fact is repeated in its training corpus multiple times, it is also very likely to repeat that fact. (For example, that the Eiffel tower is in Paris.) But if its corpus has different data, it will respond differently. (That, say, the Eiffel tower is in Rome.) It does not “know” where the Eiffel tower is. It only knows that, when you ask it where the Eiffel tower is, “Paris” – or “Rome,” depending on what it was trained on – is a very likely response to that sequence of words. It has no thoughts or memories of Paris and has no idea what Rome is, any more than it knows what a duck is. But given certain inputs, it will return the word “Paris.”

        You can’t erase facts once the model has been created, since the model is basically a black box. Weights in neural networks do not correspond to individual words, and editing the neural network is infeasible. But you can remove data from its training set and retrain it.

        Human memories are totally different, and are obviously not really editable by the humans in whose brains they reside.

        • theneverfox@pawb.social

          I think you’re getting hung up on the wrong details

          First of all, they consist of words AND weights. That’s a very important distinction - it’s literally the difference between a bare list of words and a map of how those words relate.

          They don’t know what the words mean, but they “know” the shape of the information space, and what shapes are more or less valid.

          Now as for databases - databases are basically spreadsheets. They have pieces of information in explicitly shaped groups, and they usually have relationships between them… Ultimately, it’s basically text.
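
          For contrast, here’s roughly what that explicit, lookup-style storage looks like - a minimal sqlite3 sketch, with the table and values invented purely for illustration:

          ```python
          import sqlite3

          # A database stores facts as explicit rows: a lookup either finds the row or it doesn't.
          con = sqlite3.connect(":memory:")
          con.execute("CREATE TABLE landmarks (name TEXT PRIMARY KEY, city TEXT)")
          con.execute("INSERT INTO landmarks VALUES ('Eiffel Tower', 'Paris')")

          row = con.execute(
              "SELECT city FROM landmarks WHERE name = ?", ("Eiffel Tower",)
          ).fetchone()
          print(row)  # ('Paris',) -- exact recall, and a DELETE would remove it completely
          ```

          Nothing about that row is reconstructed or probabilistic, which is the sense in which neither brains nor LLMs work like a database.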

          Our minds are not at all like a database. Memories are engrams - patterns in neurons that describe a concept. They’re a mix between information space and meat space. The engram itself encodes information in a way that allows us to process it, and its shape also links to and describes the locations of other memories. But it’s more than that - they’re also related by their shape in information space.

          You can learn the Eiffel tower is in Paris one day in class, you can see a picture of it, and you can learn it was created for the 1889 World’s Fair. You can visit it. If asked about it a decade later, you probably don’t remember the class where you learned about it. If you’re asked what it’s made of, you’re going to say metal, even if you never explicitly learned that fact. If you forget it was built for the World’s Fair, but are asked why it was built, you might say it was for a competition or to show off. If you are asked how old it is, you might say a century despite having entirely forgotten the date

          Our memories are not at all like a database: you can lose the specifics and keep the concepts, or you can forget what the Eiffel tower is but remember the words “it was built in 1889 for the World’s Fair”.

          You can forget a phone number but remember the feeling of typing it on a phone, or forget someone but suddenly remember them when they tell you about their weird hobby. We encode memories like neural networks do, but in a far more complicated way - we have different types of memory and we store things differently from individual to individual, but our knowledge and cognition are intertwined - you can take away personal autobiographical memories from a person, but you can’t take away their understanding of what a computer is without destroying their ability to function

          Between humans and LLMs, LLMs are the ones closer to databases - they at least remember explicit tokens and the links between them. But they’re way more like us than like a database - a database stores information or it doesn’t, it’s accessible or it isn’t, it’s intact or it’s corrupted. Neural networks and humans, by contrast, can remember something but get the specifics wrong, they can fail to recall a fact when asked one way but remember it when asked another, and they can entirely fabricate memories or facts from similar patterns through suggestion

          Humans and LLMs encode information in their information-processing networks - and it’s not even by design. It’s an emergent property of a network shaped by the ability to process and create information, aka intelligence (a concept now understood to be different from sentience). We do it differently in the details, but in broadly similar ways; LLMs just start from tokens and do it in a far less sophisticated way

          • Veraticus@lib.lgbt

            Everyone here is busy describing the difference between memories and databases to me as if I don’t know what it is.

            Our memories are not a database. But our memories are like a database in that they contain information, as databases do. Our consciousness is informed by our memories and can consult them.

            LLMs are not like memories, or a database. They don’t contain information. An LLM is literally a mathematical formula; if you put words in one end, words come out the other. The only difference between a statement like “always return the word Paris in response to any query” and what LLMs do is complexity, not kind. Whereas I think we can agree humans are something else entirely, right?

            The fact that they use neural networks does not make them similar to human cognition or consciousness or memory. (Separately, artificial neural networks, while inspired by biological neural networks, are categorically different from them, and there are no “emergent properties” in such a network that make it anything other than a sophisticated way of doing math.)

            So… yeah, LLMs are nothing like us, unless you believe humans are deterministic machines with no inner thought processes and no consciousness.

            • theneverfox@pawb.social

              Ok, so here’s the misunderstanding - neural networks absolutely, 100% store information. You can download Alpaca right now and ask it about Paris, or llamas, or who invented the concept of the neural network. It will give you factual information embedded in the weights; there’s nowhere else the information could be.
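
              As a rough sketch of “asking the weights,” here’s what that looks like with GPT-2 as a small stand-in for Alpaca (this assumes the transformers and torch packages are installed, and exactly which tokens rank highest will depend on the model):

              ```python
              import torch
              from transformers import AutoModelForCausalLM, AutoTokenizer

              tok = AutoTokenizer.from_pretrained("gpt2")
              model = AutoModelForCausalLM.from_pretrained("gpt2")

              prompt = "The Eiffel Tower is located in the city of"
              inputs = tok(prompt, return_tensors="pt")

              with torch.no_grad():
                  logits = model(**inputs).logits[0, -1]  # scores for the next token

              # The candidate completions come straight out of the downloaded weights;
              # no external lookup happens anywhere.
              probs = logits.softmax(dim=-1)
              top = torch.topk(probs, k=5)
              for p, idx in zip(top.values, top.indices):
                  print(f"{tok.decode(idx.item())!r}: {p.item():.3f}")
              ```

              Whether you call what comes out “factual information” or “statistically likely tokens” is exactly the disagreement in this thread, but either way the only place the answer can come from is the downloaded weight matrices.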

              People probably think you don’t understand databases because it seems self-evident that neural networks contain information - if they didn’t, where would the information come from?

              There’s no magic involved; you can prove this mathematically. We know how it works, and we can visualize the information - we can point to “this number right here is how the model stores the information about where the Eiffel tower is”. It’s too complex for us to work with right now, but we understand what’s going on

              Brains store information the same way, except they’re much more complex. Ultimately, the connections between neurons are where the data is stored - there are more layers to it, but it’s the same idea

              And emergent properties absolutely are a thing in math. No sentience or understanding required, nothing necessarily to do with life or physics at all - complexity is where emergent properties emerge from

              • Veraticus@lib.lgbt

                You are correct that there is a misunderstanding here. But it is your misunderstanding of neural networks, not mine of memory.

                LLMs are mathematical models. An LLM does not know any information about Paris, not in the same way humans do or even the way Wikipedia does. It knows what words appear in response to questions about Paris. That is not the same thing as knowing anything about Paris. It does not know what Paris is.

                You have apparently been misled into believing a word-generation tool contains any information at all other than word weights. Every word it contains is exactly as meaningless to it as every other word.

                Brains do not store data in this way. Firstly, artificial neural networks are mathematical approximations of neurons; they are not neurons and do not have the same properties as neurons, even in aggregate. Secondly, brains contain thoughts, memories, and consciousness. Even if those are representable in a vector space similar to an LLM’s (a debatable conjecture), the contents of that vector space are as different as newts are from the color purple.

                I encourage you to do some more research on this before continuing to discuss it. Ask ChatGPT itself if its neural networks are like human brains; it will tell you categorically no. Just remember it also doesn’t know what it’s talking about. It is reporting word weights from its corpus and is no substitute for actual thought and research.