• General_Effort@lemmy.world
    9 hours ago

    I find it funny that in the year 2000, while studying philosophy at the University of Copenhagen, I predicted strong AI around 2035.

    That seems to be aging well. But what is the definition of “strong AI”?

    • Buffalox@lemmy.world
      6 hours ago

      Self-aware consciousness at a human level. So it’s still far from a sure thing, because we haven’t figured consciousness out yet.
      But I’m still very happy with my prediction, because AI is now more useful and versatile than ever, its use is already very widespread, and research and investment have exploded over the past decade. And AI can already do things that used to be impossible, for instance in image and video generation and manipulation.

      But I think the code will be cracked soon, because self awareness is a matter of degree. For instance, a dog is IMO obviously self aware, but that isn’t universally recognized, because a dog doesn’t have the same degree of self awareness humans have.
      This is a problem that dates back to the 17th century and Descartes, who claimed that horses and dogs, for instance, were mere automatons and therefore couldn’t feel pain.
      That is of course completely in line with the Christian doctrine that animals don’t have souls.
      But to me it seems self awareness, like emotions, doesn’t have to start at the human level; it can start at a simpler level that can then be developed further.

      PS:
      It’s true that animals don’t have souls, in the sense of something magical provided by a god, because nobody does. Souls are not necessary to explain self awareness, consciousness, or emotions.

        • Buffalox@lemmy.world
          4 hours ago

          Understanding what “I think, therefore I am” means requires a very high level of consciousness.
          At lower levels, things get more complicated to explain.

        • Buffalox@lemmy.world
          5 hours ago

          Good question.
          Obviously the Turing test doesn’t cut it, which I already suspected back then. And I’m sure that when we finally have a self aware, conscious AI, it will be fiercely debated.
          We may think we have it before it’s actually real; some people claim to believe that some of the current systems already display traits of consciousness. I don’t believe it’s even close yet, though.
          As wrong as Descartes was about animals, he still nailed it with “I think, therefore I am” (cogito, ergo sum): https://www.britannica.com/topic/cogito-ergo-sum
          Unfortunately that’s about as far as we can get before all sorts of problems arise regarding actual evidence. So philosophically, in principle, only the AI itself can know for sure whether it is truly conscious.

          All I can say is that, at the level of intelligence current leading AIs have, they make silly mistakes that would be obvious if they were really conscious.
          For instance, as strong as they seem at analyzing logic problems, they fail to realize that 1+1=2 <=> 2=1+1.
          Such things will of course be ironed out, and maybe this one already has been. But it shows the current models aren’t good enough for the basic comprehension I would expect to follow from consciousness.
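
          For the record, the equivalence itself is trivial: equality is symmetric, so each direction of the biconditional follows from the other by symmetry alone. A minimal sketch of that in Lean 4:

          ```lean
          -- Equality is symmetric, so both directions of the
          -- biconditional are just Eq.symm applied to the hypothesis.
          example : (1 + 1 = 2) ↔ (2 = 1 + 1) :=
            ⟨Eq.symm, Eq.symm⟩
          ```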

          Luckily there are people who know much more about this, and it will be interesting to hear what they have to say when the time arrives. 😀