• doodledup@lemmy.world · 3 months ago

    You’ve been doing that too, from the day you were born.

    Besides, aren’t humans thinking in words too?

    Why is it impossible to build a text-based AGI model? Maybe there can be reasoning in between word predictions. Maybe reasoning is just a fancy term for statistics? Maybe floating-point rounding errors are sufficient for making it more than a mere token prediction model.

    • TimeSquirrel@kbin.melroy.org · 3 months ago

      Besides, aren’t humans thinking in words too?

      Not all the time. I can think about abstract concepts with no language needed whatsoever, like when I’m working on my car. I don’t need to think to myself, “Ah, this bolt is the 10mm one that went on the steering pump” — I just recognize it and put it on.

      Programming is another area like that. I just think about a particular concept itself. How the data will flow, what a function will do to it, etc. It doesn’t need to be described in my head with language to know it and understand it. LLMs cannot do that.

      A toddler doesn’t need to understand language to build a cool house out of Lego.

      • Petter1@lemm.ee · 3 months ago

        Well, you’d just have to give the LLM (or, better said, a general machine-learning algorithm) a body with vision and arms, as well as a way to train in that body.

        I’d say that would look like AGI.

        The key is more efficient training algorithms that don’t need a whole server centre to train. 😇 I guess we’ll see in the future whether this works.

    • nucleative@lemmy.world · 3 months ago

      This poster asked some questions in good faith. I don’t understand the downvotes on a legitimate contribution to the conversation; that only stifles other contributions.