• 8 Posts
  • 1.58K Comments
Joined 2 years ago
Cake day: June 18th, 2023

  • Then how will you know the difference between strong AI and not-strong AI?

    I’ve already stated that that is a problem:

    From a previous answer to you:

    Obviously the Turing test doesn’t cut it, which I suspected already back then. And I’m sure that when we finally have a self-aware, conscious AI, it will be hotly debated.

    Because I don’t think we have a sure methodology.

    “I think, therefore I am” only holds for the conscious mind itself.
    I can’t prove that other people are conscious, although I’m 100% confident they are.
    In exactly the same way, we can’t prove when we have a conscious AI.

    But we may be able to prove that it is NOT conscious, which I think is clearly the case with current-level AI. Although you don’t accept the example I provided, I believe it is clear evidence of a lack of consciousness behind the high level of intelligence it clearly has.



  • I know about the Turing test; it’s what we were taught and debated in philosophy class at the University of Copenhagen, back when I made my prediction that strong AI would probably be possible around the year 2035.

    to exhibit intelligent behaviour equivalent to that of a human

    Here “equivalent” actually means indistinguishable from a human.

    But as a test of consciousness that is not fair, because a consciousness can obviously be different from a human’s, and our understanding of how a simulation can fake something without it being real is also a factor.
    But the original question remains: how do we decide it’s not conscious if it responds as if it is?

    This connects consciousness to reasoning ability in some unclear way.

    Maybe it’s unclear because you haven’t pondered the connection? Our consciousness is a very big part of our reasoning; consciousness definitely guides our reasoning, and it improves the level of reasoning we are capable of.
    I don’t see why it’s unfortunate that the example requires training for humans to understand. A leading AI has way more training than would ever be possible for any human, yet it still doesn’t grasp basic concepts, even though its knowledge is far bigger than any human’s.

    It’s hard to explain, but intuitively it seems to me the missing factor is consciousness. It has learned tons of information by heart, but it doesn’t really understand any of it, because it isn’t conscious.

    Being conscious is not just knowing what the words mean, but understanding what they mean.
    I think, therefore I am.



  • Good question.
    Obviously the Turing test doesn’t cut it, which I suspected already back then. And I’m sure that when we finally have a self-aware, conscious AI, it will be hotly debated.
    We may think we have it before it’s actually real; some claim to believe that some of the current systems already display traits of consciousness. I don’t believe it’s even close yet, though.
    As wrong as Descartes was about animals, he still nailed it with “I think, therefore I am” (cogito, ergo sum) https://www.britannica.com/topic/cogito-ergo-sum.
    Unfortunately that’s about as far as we can get before all sorts of problems arise regarding actual evidence. So philosophically, in principle only the AI itself can know for sure whether it is truly conscious.

    All I can say is that with the level of intelligence current leading AIs have, they make silly mistakes that should be obvious to anything that is really conscious.
    For instance, as strong as they seem at analyzing logic problems, they fail to realize that 1+1=2 <=> 2=1+1 (a minimal sketch of that equivalence follows at the end of this comment).
    Such things will of course be ironed out, and maybe this one already is. But it shows that the current models aren’t good enough for the basic comprehension I would expect to follow from consciousness.

    Luckily there are people who know much more about this, and it will be interesting to hear what they have to say when the time arrives. 😀
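
    On the 1+1=2 <=> 2=1+1 point: the equivalence is nothing deeper than the symmetry of equality. Here is a minimal sketch in Lean 4, purely my own illustration of the claim rather than a prompt any model was actually given:

    -- Equality is symmetric, so "1 + 1 = 2" and "2 = 1 + 1" are the same claim read in both directions.
    example : (1 + 1 = 2) ↔ (2 = 1 + 1) :=
      ⟨Eq.symm, Eq.symm⟩

    Anything that can prove one direction already has everything it needs for the other; that is the kind of basic comprehension I have in mind.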


  • Self-aware consciousness on a human level. So it’s still far from a sure thing, because we haven’t figured consciousness out yet.
    But I’m still very happy with my prediction, because AI is now at a way more useful and versatile level than ever, its use is already very widespread, and research and investments have exploded over the past decade. And AI can already do things that used to be impossible, for instance in image and movie generation and manipulation.

    But I think the code will be cracked soon, because self-awareness is a thing of many degrees. For instance a dog is IMO obviously self-aware, but that isn’t universally recognized, because it doesn’t have the same degree of self-awareness humans have.
    This is a problem that dates back to the 17th century and Descartes, who claimed that for instance horses and dogs were mere automatons and therefore couldn’t feel pain.
    This is of course completely in line with the Christian doctrine that animals don’t have souls.
    But to me it seems that self-awareness, like emotions, doesn’t have to start at a human level; it can start at a simpler level that can then be developed further.

    PS:
    It’s true animals don’t have souls, in the sense of something magical provided by a god, because nobody does. Souls are not necessary to explain self-awareness, consciousness, or emotions.



  • I find it funny that in the year 2000, while studying philosophy at the University of Copenhagen, I predicted strong AI around 2035. This was based on calculations of computational power and estimates of software development trailing a bit behind.
    At the time I had already been interested in AI development and matters of consciousness for many years, and I was a decent programmer; I had already made self-modifying code back in 1982. So I made this prediction at a time when AI wasn’t a very popular topic, in the middle of a decades-long, futile desert walk without much progress.

    And for about 15 years, very little continued to happen. It was pretty obvious that the approach behind, for instance, Deep Blue wasn’t the way forward. But that seemed to be the norm for a long time.
    Now, though, it looks to me like the understanding of how to build a strong AI is much, much closer, as I expected. We might actually be halfway there!
    I think we are pretty close to having the computational power needed in AI-specific datacenter clusters, but the software isn’t quite there yet.

    I’m honestly not that interested in the current level of AI; although LLMs can yield very impressive results at times, they are also flawed, and I see them as somewhat transitional.
    For instance, partially self-driving cars are kind of irrelevant IMO. But truly self-driving cars will make all the difference in how useful they are, and will be a cool achievement for the current level of AI evolution when reached.

    So current-level AI can be useful, but when we achieve strong AI it will make all the difference!

    Edit PS:
    Obviously my prediction relied on the assumption that brains and consciousness are natural phenomena that don’t require a god, an assumption I personally consider a fact.



  • You can disagree with it, but the DOT states that you need to maintain a minimum distance between you and the vehicle in front of you, in respect to reaction time.

    As if everybody doesn’t already know that.
    You can maintain the distance, for instance, because you are standing still or rolling slowly towards an intersection with traffic. And you can see the distance to the car in front of you even though it isn’t your only focus of attention. Also, in bright conditions the blinker is not as visible as in darker conditions.

    You should NEVER stop blinking before the turning maneuver is finished. The three-blink feature does exactly that, and in fact it should be illegal IMO. It makes drivers lazy about their blinkers. My own car has it, and I hate it. It’s a moronic feature.

    If you stop blinking before you have even started to turn, it’s very confusing. Did you change your mind, or do you just suck at signalling?