• braindefragger@lemmy.world · 1 month ago

    It’s an LLM with well-documented processes and limitations. Not even going to watch this waste of bits.

    • UraniumBlazer@lemm.ee (OP) · 1 month ago
      1. Forming your opinion without even listening to those of others… Very open-minded of you /s
      2. Alex isn’t trying to convince YOU that ChatGPT is conscious. He’s trying to convince ChatGPT that it’s conscious. It’s just a fun video where ChatGPT gets interrogated pretty hard. A little hilarious, even.
      • JustARaccoon@lemmy.world · 1 month ago

        You cannot convince something that has no consciousness; it’s a matrix of weights that answers based on the given input + some salt.
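
        In toy form, that’s basically this (a made-up three-word vocabulary and random weights, nowhere near a real model’s scale):

        ```python
        import numpy as np

        # Toy sketch, not real model code.
        rng = np.random.default_rng(0)
        vocab = ["yes", "no", "maybe"]
        W = rng.normal(size=(3, 4))   # the "matrix of weights"
        x = rng.normal(size=4)        # the given input, encoded as a vector

        logits = W @ x                                 # deterministic scores
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax into probabilities
        print(rng.choice(vocab, p=probs))              # the "salt": random sampling
        ```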

        • UraniumBlazer@lemm.ee (OP) · 1 month ago

          You cannot convince something that has no consciousness

          Why not?

          It’s a matrix of weights that answers based on the given input + some salt

          And why can’t that be intelligence?

          What does it mean to be “convinced”? What does consciousness even mean?

          Making definitive claims like these about terms whose definitions we don’t even understand isn’t logical.

          • sugartits@lemmy.world · 1 month ago

            You cannot convince something that has no consciousness

            Why not?

            Logic.

            It’s a matrix of weights that answers based on the given input + some salt

            And why can’t that be intelligence?

            For the same reason I can’t get a date with Michelle Ryan: it’s a physical impossibility.

            • UraniumBlazer@lemm.ee (OP) · 1 month ago

              Logic

              Please explain your reasoning.

              For the same reason I can’t get a date with Michelle Ryan: it’s a physical impossibility.

              Huh?

              • sugartits@lemmy.world · 1 month ago

                Logic

                Please explain your reasoning.

                Others have already done this and you seem to be ignoring them, so I’m not sure what the point of asking is.

                Go look at some of the code these AIs are powered by. It’s just parameters. Lots and lots of parameters. Given those, the output is inevitable.
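
                Here’s that point in miniature (made-up numbers, not any real system’s code): with the parameters, the input, and the sampling seed all fixed, the output never varies.

                ```python
                import numpy as np

                # Toy sketch: fixed parameters + fixed input + fixed seed
                # means a fixed, inevitable output.
                def toy_model(x, W, seed):
                    rng = np.random.default_rng(seed)
                    logits = W @ x
                    probs = np.exp(logits) / np.exp(logits).sum()
                    return rng.choice(len(probs), p=probs)

                W = np.arange(12.0).reshape(3, 4)  # "lots and lots of parameters"
                x = np.ones(4)                     # the prompt, as a vector
                print(toy_model(x, W, 42) == toy_model(x, W, 42))  # True, every time
                ```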

                For the same reason I can’t get a date with Michelle Ryan: it’s a physical impossibility.

                Huh?

                If you’re too lazy to even look up the most basic thing you don’t understand, then I guess we’re done here.

      • Eximius@lemmy.world · 1 month ago

        If you have any understanding of its internals, and have seen some examples of its answers, it is very clear it has no notion of what is “correct” or “right”, or even what an “opinion” is. It is just a turbocharged autocorrect that maybe, maybe, maybe has extracted some nice details about human concepts from language into a coherent-ish connected mesh of “concepts”.
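
        To make the autocorrect comparison concrete, here’s a stripped-down sketch (a tiny made-up corpus and bigram counts; real models condition on far more context, but the objective is the same next-word prediction):

        ```python
        from collections import Counter, defaultdict

        # Count which word follows which in a tiny corpus, then
        # repeatedly emit the most probable continuation.
        corpus = "the cat sat on the mat the cat ate the fish".split()
        nxt = defaultdict(Counter)
        for a, b in zip(corpus, corpus[1:]):
            nxt[a][b] += 1

        word = "the"
        for _ in range(4):
            word = nxt[word].most_common(1)[0][0]  # most probable next word
            print(word, end=" ")                   # -> cat sat on the
        ```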

      • Dark Arc · 1 month ago (edited)

        These things are like arguing about whether or not a pet has feelings…

        I’d say it’s far more likely for a cat or a dog to have complex emotions and thoughts than for the human-made LLM to actually be thinking. It seems to me like the naivety of humankind that we even think we might have created something with consciousness.

        I’m in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I’m wrong.

        • UraniumBlazer@lemm.ee (OP) · 1 month ago

          These things are like arguing about whether or not a pet has feelings…

          Mhm. And what’s fundamentally wrong with such an argument?

          I’d say it’s far more likely for a cat or a dog to have complex emotions and thoughts than for the human-made LLM to actually be thinking.

          Why?

          I’m in the camp that thinks the LLMs are by and large a huge grift (that can produce useful output for certain tasks) by virtue of extreme exaggeration of the facts, but maybe I’m wrong.

          Why?

          I too see how grifters use AI to further their scams. That’s the case with any new tech that pops up. This, however, doesn’t make LLMs uninteresting.

  • hendrik@palaver.p3x.de · 1 month ago

    I like the video. I think it’s fun to argue with ChatGPT. Just don’t expect anything to come of it, or to get closer to any objective truth that way. ChatGPT just backpedals and gets caught up in lies / contradictions with what it said earlier.

  • Telorand@reddthat.com · 1 month ago

    This all hinges on the definition of “conscious.” You can make a valid syllogism that defines it, but that doesn’t necessarily represent a reasonable or accurate summary of what consciousness is. There’s no current consensus amongst philosophers and scientists on what consciousness is, and many presume an anthropocentric model.

    I can’t watch the video right now, but in a few minutes I was able to get ChatGPT to concede that it might be conscious, in a way sufficiently different from humans’ that it initially doesn’t appear conscious.

    • UraniumBlazer@lemm.ee (OP) · 1 month ago

      Exactly. Which is what makes this entire thing quite interesting.

      Alex here (the interrogator in the video) is involved in AI safety research. Questions like “do the ethical frameworks of AI match those of humans” and “how do we get AI to not misinterpret inputs and do something dangerous” are very important to answer.

      Following this comes the idea of consciousness. Can machine learning models feel pain? Can we unintentionally put such models into immense eternal pain? What even is the nature of pain?

      Alex demonstrated that ChatGPT was lying intentionally. Can it lie intentionally for other things? What about the question of consciousness itself? Could we build models that intentionally fail the Turing test? Should we be scared of such a possibility?

      Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.

      • conciselyverbose@sh.itjust.works · 1 month ago

        Alex demonstrated that ChatGPT was lying intentionally

        No, he most certainly did not. LLMs have no agency. “Intentionally” doing anything isn’t possible.

        • UraniumBlazer@lemm.ee (OP) · 1 month ago

          LLMs have no agency.

          Define “agency”. Why do you have agency but an LLM doesn’t?

          “Intentionally” doing anything isn’t possible.

          I see “intention” as a goal in this context. ChatGPT explained that the goal was to make the conversation appear “natural” (i.e., human-like). This was the intention/goal behind its lying to Alex.

          • Zeoic@lemmy.world · 1 month ago

            That “intention” is not made by ChatGPT, though. Its developers intend for conversation with the LLM to appear natural.

            • UraniumBlazer@lemm.ee (OP) · 1 month ago

              ChatGPT says this itself. However, why does an intention have to be made by ChatGPT itself? Our intentions are often trained into us by others. Take propaganda, for example: political propaganda, corporate propaganda (advertisements), and so on.

              • Zeoic@lemmy.world · 1 month ago

                We have the ability to create our own intentions. Just because we follow others sometimes doesn’t change that.

                Also, if you wrote “I am conscious” on a piece of paper, does that mean the paper is conscious? Does this paper now have the intent to have a natural conversation with you? There is not much difference between that paper and what ChatGPT is doing.

                • UraniumBlazer@lemm.ee (OP) · 1 month ago

                  The main problem is the definition of what “us” means here. Our brain is a biological machine guided by the laws of physics. We have input parameters (stimuli) and output parameters (behavior).

                  We respond to stimuli. That’s all that we do. So what does “we” even mean? The chemical reactions? The response to stimuli? Even a worm responds to stimuli. So does an amoeba.

                  There sure is complexity in how we respond to stimuli.

                  The main problem here is an absent objective definition of consciousness. We simply don’t know how to define consciousness (yet).

                  This is primarily what leads to questions like the one you just raised.

      • Telorand@reddthat.com · 1 month ago

        Questions like these are really interesting. Unfortunately, they are shot down immediately on Lemmy, which is pretty disappointing.

        It’s just because AI stuff is overhyped pretty much everywhere as a panacea to solve all capitalist ills. Seems like every other article, no matter the subject or demographic, is about how AI is changing/ruining it.

        I do think that grappling with the idea of consciousness is a necessary component of the human experience, and AI is another way for us to continue figuring out what it means to be conscious, self-aware, or a free agent. I also agree that it’s interesting to try to break AI and push it to its limits, but then, breaking software is in my professional interests!

        • Ilandar@aussie.zone · 1 month ago

          I do think that grappling with the idea of consciousness is a necessary component of the human experience, and AI is another way for us to continue figuring out what it means to be conscious, self-aware, or a free agent

          You might be interested in the book ‘The Naked Neanderthal’ by Ludovic Slimak. He is an archaeologist but the book is quite philosophical and explores this idea of learning about humanity through the study of other forms of intelligence (Neanderthals). Here are some opening paragraphs from the book to give you an idea of what I mean:

          The interstellar perspective, this suggestion of distant intelligences, reminds us that we humans are alone, orphans, the only living conscious beings capable of analysing the mysteries of the universe that surrounds us. There are countless other forms of animal intelligence, but no consciousness with which we can exchange ideas, compare ourselves, or have a conversation.

          These distant intelligences outside of us perhaps do exist in the immensity of space - the ultimate enigma. And yet we know for certain that they have existed in a time which appears distant to us but in fact is extremely close.

          The real enigma is that these intelligences from the past became progressively extinct over the course of millennia; there was a tipping point in the history of humanity, the last moment when a consciousness external to humanity as we conceive it existed, encountered us, rubbed shoulders with us. This lost otherness still haunts us in our hopes and fears of artificial intelligence, the instrumentalized rebirth of a consciousness that does not belong to us.

        • UraniumBlazer@lemm.ee (OP) · 1 month ago

          It’s just because AI stuff is overhyped pretty much everywhere as a panacea to solve all capitalist ills. Seems like every other article, no matter the subject or demographic, is about how AI is changing/ruining it.

          Agreed :(

          You know what’s sad? Communities that look at this from a neutral, objective position (while still being fun) exist on Reddit. I really don’t want to keep using it though. But I see nothing like that on Lemmy.

          • Telorand@reddthat.com · 1 month ago

            Lemmy is still in its infancy, and we’re the early adopters. It will come into its own in due time, just like Reddit did.