• mke@lemmy.world · 2 months ago

    Except LLMs don’t actually have real reasoning capacity. Hooking in different models that can translate more of the world into text could give the LLM a broader domain, but not an entirely new ability beyond its architecture. That might make it more convincing, but it would still fail in the same ways it currently does.

    • doodledup@lemmy.world · 2 months ago (edited)

      You’re doing reasoning based on chemical reactions. Who says a model can’t do reasoning based on text? Who says it isn’t already doing that in some capacity? Can you prove that it isn’t?

      • mke@lemmy.world · 2 months ago (edited)

        If you genuinely think LLMs are in any way capable of even basic reasoning, despite all the arguments to the contrary, I honestly don’t want to keep trying to convince you. You’re asking for a miracle out of me, to explain consciousness itself, even, while you get to just say “but there’s a chance” even though LLMs can’t get basic facts right.