• funkless_eck@sh.itjust.works · 7 months ago

    “ooh it’s more advanced but don’t worry- it’s not conscious”

    is as much a marketing tactic as “how it feels to chew 5 Gum” or BuzzFeed-esque “top 10 celebrity mistakes - number 3 will blow your mind”

    it’s a tech product that runs a series of complicated loops against a large series of texts and returns the closest comparison; as it stands, it’s never going to be dangerous in and of itself.

    • Thorny_Insight@lemm.ee · 7 months ago

      Generative AI and LLMs are not what people mean when they’re talking about the dangers of AI. What we worry about doesn’t exist yet.

      • hikaru755@feddit.de · 7 months ago

        I mean… It might be. It just depends on how much potential there still is to get models up to higher reasoning capabilities, and I don’t think anyone really knows that yet.

        • Thorny_Insight@lemm.ee · 7 months ago

          Yeah, maybe. I just personally don’t think LLMs are actually intelligent. They’re capable of faking intelligence, but at the same time they make errors that perfectly indicate they’re basically just bluffing. I’d be more worried about an AI that knows fewer things but demonstrates a higher capability for logic and reasoning.

      • funkless_eck@sh.itjust.works · 7 months ago

        I don’t think AI sentience as a danger is going to be an issue in our lifetimes - this January marks 103 years since the first well-known story featuring this trope (Karel Čapek’s Rossumovi univerzální roboti, better known as R.U.R.).

        We are a long way off from being able to replicate in a virtual system the perception, action, and unified agency of even basic organisms.

        Therefore, all claims about the “dangers” of AI are really dangers of humans using the tool (akin to the dangers of driving a car vs. the danger of cars attacking their owners without human interaction), and thus are just marketing hyperbole

        in my opinion, of course.

        • Thorny_Insight@lemm.ee · 7 months ago

          Well, yeah, perhaps, but isn’t that kind of like knowing that an asteroid is heading towards Earth and feeling no urgency about it? There’s a non-zero chance that we’ll create AGI within the next couple of years. The chances may be low, but the consequences have the potential to literally end humanity - or worse.

    • kromem@lemmy.world · 7 months ago

      > it’s a tech product that runs a series of complicated loops against a large series of texts and returns the closest comparison; as it stands, it’s never going to be dangerous in and of itself.

      That’s not how it works. I really don’t get what’s with people these days being so willing to be confidently incorrect. It’s like after the pandemic people just decided that if everyone else was spewing BS from their “gut feelings,” well gosh darnit they could too!

      It uses gradient descent on a large series of texts to build a neural network capable of predicting those texts as accurately as possible.
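
      In code, that objective looks roughly like the following - a minimal sketch using a toy bigram model in PyTorch rather than an actual transformer (the corpus, model shape, and hyperparameters are all made-up stand-ins), but the training loop is the same shape:

      ```python
      # Next-token prediction: gradient descent nudges the weights so the
      # model assigns high probability to each token that actually follows.
      import torch
      import torch.nn as nn

      corpus = "the cat sat on the mat. the dog sat on the rug."
      vocab = sorted(set(corpus))
      stoi = {ch: i for i, ch in enumerate(vocab)}
      ids = torch.tensor([stoi[ch] for ch in corpus])

      x, y = ids[:-1], ids[1:]  # inputs, and the targets shifted one token ahead

      # Toy stand-in for a transformer: embed each token, map to next-token logits.
      model = nn.Sequential(nn.Embedding(len(vocab), 32), nn.Linear(32, len(vocab)))
      opt = torch.optim.SGD(model.parameters(), lr=0.1)

      for step in range(200):
          logits = model(x)                              # a score for every possible next token
          loss = nn.functional.cross_entropy(logits, y)  # how wrong the predictions are
          opt.zero_grad()
          loss.backward()                                # gradient of the loss w.r.t. every weight
          opt.step()                                     # descend: nudge the weights downhill
      ```

      Note there’s no “closest comparison” lookup anywhere in there - just weights being adjusted to make prediction errors smaller.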

      How that network actually operates ends up a black box, especially for larger models.

      But research over the past year and a half on simpler toy models has found that there’s a rather extensive degree of abstraction. For example, a small GPT trained only on legal Othello or chess moves ends up building a virtual representation of the board and tracks “my pieces” and “opponent pieces” on it, despite never being fed anything that directly describes the board or the concept of ‘mine’ vs ‘other’. In fact, in the chess model, the research found there was even a single vector in the neural network that could be flipped to have the model play well or play like shit regardless of the surrounding moves fed in.
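
      Concretely, those findings come from “linear probes”: capture the model’s hidden activations mid-game, then train a tiny linear classifier to read the board state off them. A rough sketch of the idea - the shapes and data below are made-up stand-ins, since the real studies train on activations captured from the actual game model:

      ```python
      # Linear probing: if a tiny linear layer can decode a board square's
      # state from the network's hidden activations, the network must be
      # representing the board internally. Random stand-in data below.
      import torch
      import torch.nn as nn

      d_model, n_positions = 512, 10_000
      hidden = torch.randn(n_positions, d_model)    # stand-in for captured activations
      labels = torch.randint(0, 3, (n_positions,))  # 0=empty, 1=mine, 2=theirs

      probe = nn.Linear(d_model, 3)                 # deliberately tiny: no hidden layers
      opt = torch.optim.Adam(probe.parameters(), lr=1e-3)

      for _ in range(500):
          loss = nn.functional.cross_entropy(probe(hidden), labels)
          opt.zero_grad()
          loss.backward()
          opt.step()

      # In the actual studies, high probe accuracy on held-out positions means
      # the board state is linearly readable from the activations, even though
      # the model was only ever fed move sequences.
      ```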

      It’s fairly different from what you seem to think it is. Though I suspect that’s not going to matter to you in the least, as I’ve come to find that explaining transformers to people spouting misinformation about them online has about the same result as a few years ago explaining vaccine research to people spouting misinformation about that.

      • funkless_eck@sh.itjust.works · edited · 7 months ago

        I don’t know if saying “it’s not a loop! it’s an iterative process using a series of steps!” is that much of a burn.

        my dude, that’s a loop.
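
        and for the record, the inference side is literally one too - sketched below, with model as a hypothetical callable that returns next-token logits:

        ```python
        # Autoregressive generation, reduced to its control flow: predict one
        # token, append it, feed the longer sequence back in. `model` here is
        # a hypothetical callable returning one logit row per input token.
        import torch

        def generate(model, tokens: list[int], steps: int) -> list[int]:
            for _ in range(steps):                       # a literal loop
                logits = model(torch.tensor(tokens))     # scores for every possible next token
                tokens.append(int(logits[-1].argmax()))  # greedily append the top one
            return tokens
        ```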

        • Chakravanti@sh.itjust.works · 7 months ago

          Well, He Who Remains came by just to show that everything we experience is always part of a bigger loop. You can fucking kill him and even slam the brakes; crash to his design the highest number of alternate dimensions and then some, and it won’t stop the loop. 99.99% of the time he’ll be back. We only need to consciously accept the concept of no more than the notion to summon his return. Even if we were to successfully crack the time management mech and undo his manipulation, he’ll be back when we track him down to build another one.

          The Loop is more nature than matter and energy combined. When everything in all of reality would expand infinitely far apart, the whole shebang goes lateral mirror again with a whole new dimension. There is no end to any aspect of reality. Anywhere it would be, turns out it’s “just” “another” Loop Mirror.