“An intriguing open question is whether the LLM is actually using its internal model of reality to reason about that reality as it solves the robot navigation problem,” says Rinard. “While our results are consistent with the LLM using the model in this way, our experiments are not designed to answer this next question.”

The paper, “Emergent Representations of Program Semantics in Language Models Trained on Programs,” can be found here.

Abstract

We present evidence that language models (LMs) of code can learn to represent the formal semantics of programs, despite being trained only to perform next-token prediction. Specifically, we train a Transformer model on a synthetic corpus of programs written in a domain-specific language for navigating 2D grid world environments. Each program in the corpus is preceded by a (partial) specification in the form of several input-output grid world states. Despite providing no further inductive biases, we find that a probing classifier is able to extract increasingly accurate representations of the unobserved, intermediate grid world states from the LM hidden states over the course of training, suggesting the LM acquires an emergent ability to interpret programs in the formal sense. We also develop a novel interventional baseline that enables us to disambiguate what is represented by the LM as opposed to learned by the probe. We anticipate that this technique may be generally applicable to a broad range of semantic probing experiments. In summary, this paper does not propose any new techniques for training LMs of code, but develops an experimental framework for and provides insights into the acquisition and representation of formal semantics in statistical models of code.
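The probing setup described in the abstract can be illustrated with a minimal sketch: fit a linear classifier on frozen model activations and measure how much grid-state information they expose. This is not the paper's code; the hidden states and grid-state labels below are synthetic stand-ins, and all names and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins (not from the paper): in the actual experiments the
# features are Transformer hidden states and the labels describe the
# unobserved intermediate grid world states.
n_examples, hidden_dim, n_states = 500, 64, 8

# Synthetic data in which the "hidden states" linearly encode the label,
# so a linear probe should be able to recover it.
true_W = rng.normal(size=(hidden_dim, n_states))
hidden = rng.normal(size=(n_examples, hidden_dim))
labels = (hidden @ true_W).argmax(axis=1)

# Train a linear probe (multinomial logistic regression) by gradient descent.
W = np.zeros((hidden_dim, n_states))
for _ in range(500):
    scores = hidden @ W
    scores -= scores.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(scores)
    probs /= probs.sum(axis=1, keepdims=True)
    probs[np.arange(n_examples), labels] -= 1.0   # softmax cross-entropy gradient
    W -= 0.5 * (hidden.T @ probs) / n_examples

probe_acc = ((hidden @ W).argmax(axis=1) == labels).mean()
print(f"probe accuracy: {probe_acc:.2f}")
```

In the paper's framework, a probe like this is trained on hidden states taken at checkpoints throughout LM training; rising probe accuracy over the course of training is the evidence of emergent semantic representations, while the interventional baseline mentioned in the abstract controls for how much of that accuracy could have been learned by the probe itself rather than represented by the LM.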

  • Deceptichum@quokk.au · 3 months ago

    Do I “think” or does my brain pick the closest neuron and spit out a function based on that input?

    If we could recreate the universe, would I do the exact same thing in the exact same situation?

    • atrielienz@lemmy.world · 3 months ago

      I’m sorry. Because you don’t understand how your brain works, you’re suggesting it must work the same way as something a similar brain created, even though you don’t know how either thing works. That’s not an argument.

      • Deceptichum@quokk.au · 3 months ago

        No, I’m not suggesting that.

        I’m suggesting that if we don’t even understand how consciousness works for ourselves, we cannot make claims about how it will look for other things.

        Deterministically, free will does not exist; if we cannot exercise free will, we cannot have independent thoughts, the same as a machine.

        Truth is, we don’t really know shit; we’re biological machines that think they’re in control of themselves based on inputs. If we ever discover true AGI, it will be by accident as we fiddle with technologies such as LLMs or other complex models.

        • atrielienz@lemmy.world · 3 months ago

          Okay. Feed a new species that hasn’t been named yet into an LLM. Does it name that new creature? Can it decide which family, phylum, etc. it belongs to? Does it pick up on the specific attributes of that new species?

          • Deceptichum@quokk.au · 3 months ago

            It might be able to pick those things out, I certainly couldn’t.

            Edit: So ChatGPT correctly identified a new species from 4 days ago as a type of Storm Petrel and a new flower from Sri Lanka as an Orchidaceae. Far better than I could do.

            • atrielienz@lemmy.world · 3 months ago

              That is very deliberately not in the spirit of the question I asked. It’s almost like you’re intent on misunderstanding on purpose just so you can feel like you’re right.

              • Deceptichum@quokk.au · 3 months ago

                You asked if it could do a task I wasn’t even capable of doing, and that was your assessment of consciousness.

                • atrielienz@lemmy.world · 3 months ago

                  No. I asked about an unclassified, un-named species, not something someone else just discovered and has already parsed information on. And the point is that humans can and do do this, and have done it for centuries with the right training, as the systems we use for classification have been dialed in.

                  The model has the information on how to classify, and it can be added to with data scraped from the internet. But it does not do the same things a trained individual does to classify and name a new species, because it is not capable of that.

                  • Deceptichum@quokk.au · 3 months ago

                    The information from 4 days ago had not been parsed; that’s why I chose something so recent.

                    And an LLM can be trained to do this. When it looked at the petrel, it literally did the things humans do, such as take note of the dark colours common in seabirds, the small size, etc., and it used those points to reach its conclusion.

                    We don’t do anything special as humans, we take in data, process it, and spit out a result. It’s why a child has to be taught basic concepts such as creativity or socialising.

        • tabular@lemmy.world · 3 months ago

          I suspect others are talking about “thinking” only objectively.

          A) If an LLM has no input, then there are no processes going on at all which could be described as thinking (objectively verifiable: what is the program doing).

          B) If an LLM had a subjective experience when given input, presumably it has none when all processes are stopped (subjective, unverifiable).