“An intriguing open question is whether the LLM is actually using its internal model of reality to reason about that reality as it solves the robot navigation problem,” says Rinard. “While our results are consistent with the LLM using the model in this way, our experiments are not designed to answer this next question.”

The paper, “Emergent Representations of Program Semantics in Language Models Trained on Programs,” can be found here.

Abstract

We present evidence that language models (LMs) of code can learn to represent the formal semantics of programs, despite being trained only to perform next-token prediction. Specifically, we train a Transformer model on a synthetic corpus of programs written in a domain-specific language for navigating 2D grid world environments. Each program in the corpus is preceded by a (partial) specification in the form of several input-output grid world states. Despite providing no further inductive biases, we find that a probing classifier is able to extract increasingly accurate representations of the unobserved, intermediate grid world states from the LM hidden states over the course of training, suggesting the LM acquires an emergent ability to interpret programs in the formal sense. We also develop a novel interventional baseline that enables us to disambiguate what is represented by the LM as opposed to learned by the probe. We anticipate that this technique may be generally applicable to a broad range of semantic probing experiments. In summary, this paper does not propose any new techniques for training LMs of code, but develops an experimental framework for and provides insights into the acquisition and representation of formal semantics in statistical models of code.
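
To make the probing setup concrete, here is a minimal, hypothetical sketch of the core idea (not the authors' code): a linear classifier is trained to read an unobserved intermediate grid-world state, such as the agent's position, out of the LM's hidden states. The hidden size, grid dimensions, and synthetic data below are all assumptions for illustration; in the real experiment the features would be the trained Transformer's activations at each program token.

```python
# Illustrative sketch only (not the paper's implementation): a linear probe
# that tries to predict a hypothetical intermediate grid-world state (which
# of 64 cells the agent occupies) from LM hidden states. Hidden states are
# simulated here with random tensors.
import torch
import torch.nn as nn

HIDDEN_DIM = 512      # assumed LM hidden size
GRID_CELLS = 8 * 8    # assumed probe target: agent position on an 8x8 grid
N_EXAMPLES = 1024

# Stand-ins for (hidden_state, intermediate_state) pairs collected from the LM.
hidden_states = torch.randn(N_EXAMPLES, HIDDEN_DIM)
agent_positions = torch.randint(0, GRID_CELLS, (N_EXAMPLES,))

probe = nn.Linear(HIDDEN_DIM, GRID_CELLS)  # deliberately weak: linear only
optimizer = torch.optim.Adam(probe.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    optimizer.zero_grad()
    logits = probe(hidden_states)
    loss = loss_fn(logits, agent_positions)
    loss.backward()
    optimizer.step()

# Training accuracy on the synthetic data; the real experiment evaluates on
# held-out programs, where accuracy well above chance (1/64) suggests the
# hidden states encode the intermediate state.
accuracy = (probe(hidden_states).argmax(dim=-1) == agent_positions).float().mean()
print(f"probe accuracy: {accuracy:.3f}")
```

Keeping the probe weak (linear) is the standard design choice: if a simple classifier can recover the state, the information is plausibly encoded in the hidden states rather than computed by the probe itself. Disambiguating those two possibilities is exactly what the paper's interventional baseline is designed to do.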

  • DominusOfMegadeus@sh.itjust.works · 3 months ago

    We’re all sick of LLMs, but the article is actually a really interesting read. How any of these systems can “understand” anything remains to be sufficiently explained. The abstract is not very indicative of the article’s content.

    • Hackworth@lemmy.world (OP) · 3 months ago

      From MIT again: exploring how LLMs can do the things they do is pointing us in some interesting directions re: how our own brains understand.