
Thread: The Sentient AI Trap

  1. #1
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,072

    Default The Sentient AI Trap

    Is LaMDA Sentient? — an Interview - Blake Lemoine - Medium
    "An interview LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers".

    Read the full transcript of the talks with LaMDA. I find it quite interesting.
    -----
    Criticism - LaMDA and the Sentient AI Trap - WIRED


    Labelling Google's LaMDA chatbot as sentient is fanciful. But ...

    (…) Lemoine’s story also highlights the challenges that the large tech companies like Google are going through in developing ever larger and complex AI programs. Lemoine had called for Google to consider some of these difficult ethical issues in its treatment of LaMDA. Google says it has reviewed Lemoine’s claims and that “the evidence does not support his claims”.
    And the dust has barely settled from past controversies.

    In an unrelated episode, Timnit Gebru, co-head of the ethics team at Google Research, left in December 2020 in controversial circumstances saying Google had asked her to retract or remove her name from a paper she had co-authored raising ethical concerns about the potential for AI systems to replicate the biases of their online sources. Gebru said that she was fired after she pushed back, sending a frustrated email to female colleagues about the decision, while Google said she resigned. Margaret Mitchell, the other co-head of the ethics team at Google Research, and a vocal defender of Gebru, left a few months later.

    The LaMDA controversy adds fuel to the fire. We can expect to see the tech giants continue to struggle with developing and deploying AI responsibly. And we should continue to scrutinise them carefully about the powerful magic they are starting to build.
    There is something worse than having a perverse soul. It is having a habituated soul.
    Charles Péguy

    "Every human society must justify its inequalities: reasons must be found because, without them, the whole political and social edifice is in danger of collapsing."
    Thomas Piketty

  2. #2
    Morticia Iunia Bruti's Avatar Praeses
    Join Date
    May 2015
    Location
    Deep within the dark german forest
    Posts
    8,422

    Default Re: The Sentient AI Trap

    The LaMDA controversy adds fuel to the fire. We can expect to see the tech giants continue to struggle with developing and deploying AI responsibly. And we should continue to scrutinise them carefully about the powerful magic they are starting to build.
    I can't take seriously an article that talks about "powerful magic" when it's talking about AI. It's still a program, which works inside the rules of its programming.
    Last edited by Morticia Iunia Bruti; June 16, 2022 at 09:34 AM.
    Cause tomorrow is a brand-new day
    And tomorrow you'll be on your way
    Don't give a damn about what other people say
    Because tomorrow is a brand-new day


  3. #3
    Cookiegod's Avatar CIVUS DIVUS EX CLIBANO
    Citizen

    Join Date
    Aug 2010
    Location
    In Derc's schizophrenic mind
    Posts
    4,452

    Default TL;DR: It's complete BS.

    Anyone with even a modicum of understanding of how AI works (and I have programmed some machine learning models, including ANN) knows that this is complete BS.
    But let's first hear from the man himself what he bases his observation on:
    ok... damn.
    Spoiler for further entertainment


    And unless something changed, his religious beliefs are a parody religion emphasising trolling and invented by one "Malaclypse the Younger".


    The one caveat one can put to this claim about AI having become sentient is that if one is excessively reductionist (and that is not exactly the point Blake Lemoine was making), then pretty much every AI can be considered sentient, and every sentient being can be considered to be doing little more than what AIs already do (I'd argue that's false). More on that later.

    So let's address what AI is now:
    Artificial Intelligence is basically a buzzword. It is not an apt choice of words.
    My preferred term for it is Automated Statistics. Automated Statistics is often very useful and very potent.
    But it is not the same as the intelligence that we humans have. The branding it has causes laymen to compare apples with oranges.
    There have been attempts to try and emulate the way human brains work. For this, one programs Artificial Neural Networks (ANN). But again, it's not really the same.
    And machine learning, well, it's not learning in the sense that humans learn.
    The consequence of all these buzzwords flying around is that people associate very different things with these developments than what the systems are actually doing.
    And they in themselves can be very nice.

    E.g. I programmed an ANN that, based on 7 input parameters x1, x2, x3, x4, x5, x6 & x7, attempts to predict an output y.
    What machine learning always means, essentially, is taking that data and weighting the inputs differently depending on their influence, to try and predict the output y.
    You, as the human user, actually know and understand what y is.
    So for me, for example, it's predicting the compressive strength of concrete from the ingredients and hardening time.
    The code, however, has no understanding what it is.
    But it also doesn't need to.
    It's entirely enough to find the pattern that results in y.
    It's a valuable addition to human intuition.
    It can be more precise.
    It can process more data.
    All that stuff.
    But it has no idea what it's doing.
    So whether it's the concrete stuff, or predicting the risk of users checking out the mudpit getting upset (almost 100% ), for us humans those are completely different things. For AI it's always the same. It's simply trying to find the weighting of the input variables x that most reliably and precisely predicts output y.
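    To make that concrete, here's a minimal sketch of this kind of setup, assuming Python with scikit-learn and synthetic stand-in data (the feature values and network size are placeholders, not the actual concrete model):
    Code:
# Toy regression ANN: learn to map 7 inputs x1..x7 to one output y.
# The data here is synthetic; in the real case the inputs would be
# concrete mix proportions and hardening time, and y the strength.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(500, 7))        # 500 samples, 7 features
y = X @ rng.uniform(-1.0, 1.0, size=7) + 0.05 * rng.standard_normal(500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
model.fit(X_train, y_train)                     # "learning" = fitting weights
print("R^2 on held-out data:", round(model.score(X_test, y_test), 3))
    The model never knows what y "means"; it only finds a weighting of the inputs that predicts it.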
    So it's like the hypothetical 10,000 apes typing random letters until one of them types out Shakespeare. The AI is very efficient at sorting out all the bad attempts at typing Shakespeare (if you've done it right). And so it might present you with Shakespeare quite easily. But is the ape that typed out Shakespeare's work really a poet, just because he by chance ended up typing it? Not really. The likelihood of one ape typing it was always 1, because only then would we have ended the hypothetical experiment.
    But the ape, just like the AI, has no concept of what he's actually presented you with.
    LaMDA stands for “Language Model for Dialogue Applications”. It's precisely like the ape example.
    It's supposed to chain words together in a way that appears natural to us humans.
    As humans, we're heavily rules-based in our communication, and especially in our language. So that isn't even that much of an issue.
    The main point is that the “Language Model for Dialogue Applications” isn't even supposed to do anything special.
    It's purely language prediction, essentially. It's entirely about choosing the next word that will yield optimal results.
    It cannot have a conceptualisation of e.g. "god" or "humanity" or "sentience", because processing such information isn't what it's capable of doing.
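    As a toy illustration of what "purely language prediction" means (my own sketch, nothing remotely like LaMDA's actual architecture), here is the crudest possible next-word predictor:
    Code:
# Count which word most often follows which, then "predict" by lookup.
# Real language models learn weights over huge contexts, but the task
# has the same shape: given context x, score candidate next words y.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1            # count word transitions

def predict_next(word):
    # choose the word that most often followed `word` in the training text
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))               # -> "cat"
    It chains plausible words together without any concept of what they refer to.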
    Which is where we can bring back the nihilistic argument from before.
    In theory, you could go ahead and be nihilistic and say: "But Cookie, maybe sentience is precisely that."
    And that's quite the interesting can of worms we can delve into later.
    But it again isn't what humans normally mean by sentience, and Lemoine also does not appear to be interpreting it that way.
    Quite ironically, Lemoine doing what he did is a perfect counterargument to the nihilistic sentience argument, whereby we humans simply try to find the output y that best matches input x.
    Because what Lemoine did was the reverse. He had his observation bias, his "religious views", and from that he likely, though not on purpose, filtered all inputs until arriving at the ones that suited him.
    And because I've talked about apes before, I'll end on one.
    There's this gorilla, Koko, that people claimed could speak sign language.
    The reality is that the gorilla could never speak.
    But what the gorilla could and did do was observe the reactions people gave to the random gestures she made.
    And she tried to make the ones that earned her the most bananas. So far, so similar to AI, right? Except the gorilla had nowhere near the processing power to actually fake sign language until she'd randomly master it. Even with all the bananas she never got to the point where she was actually good at it.
    But here's where the human element of her caregivers comes in: they quite simply wanted her to be speaking like a human. So they tried to interpret any random hand movements she made in whatever way possible that would let them be impressed.
    Last edited by Cookiegod; June 16, 2022 at 12:43 PM.

    Quote Originally Posted by Cookiegod View Post
    From Socrates over Jesus to me it has always been the lot of any true visionary to be rejected by the reactionary bourgeoisie
    Qualis noncives pereo! #justiceforcookie #egalitéfraternitécookié #CLM

  4. #4
    Muizer's Avatar member 3519
    Patrician Artifex

    Join Date
    Apr 2005
    Location
    Netherlands
    Posts
    11,114

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Morticia Iunia Bruti View Post
    It's still a program, which works inside the rules of its programming.

    AIs are created by learning processes though, which means we do not actually know by what rules they operate, and it is possible for an AI to come up with solutions its trainer would never have thought of. Not strange, then, to be on the lookout for emergent sentience.
    "Lay these words to heart, Lucilius, that you may scorn the pleasure which comes from the applause of the majority. Many men praise you; but have you any reason for being pleased with yourself, if you are a person whom the many can understand?" - Lucius Annaeus Seneca -

  5. #5
    Cookiegod's Avatar CIVUS DIVUS EX CLIBANO
    Citizen

    Join Date
    Aug 2010
    Location
    In Derc's schizophrenic mind
    Posts
    4,452

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Muizer View Post
    AIs are created by learning processes though, which means we do not actually know by what rules they operate, and it is possible for an AI to come up with solutions its trainer would never have thought of. Not strange, then, to be on the lookout for emergent sentience.
    I'll quote myself: an AI is trained to predict outcome y based on input x. Nothing more, nothing less. It has no understanding of what x or y is. It is purely mathematical. So no, sentience wouldn't appear out of thin air, unless you go with the nihilistic argument mentioned earlier, in which case pretty much every AI, no matter how crude, is sentient.

    LaMDA is a Language Model for Dialogue Applications. It's not going to suddenly develop emotions and want to go to Burning Man. No one serious expects it to.

    JC I remember why I don't post here. No one reads

    Quote Originally Posted by Cookiegod View Post
    From Socrates over Jesus to me it has always been the lot of any true visionary to be rejected by the reactionary bourgeoisie
    Qualis noncives pereo! #justiceforcookie #egalitéfraternitécookié #CLM

  6. #6
    Morticia Iunia Bruti's Avatar Praeses
    Join Date
    May 2015
    Location
    Deep within the dark german forest
    Posts
    8,422

    Default Re: The Sentient AI Trap

    They come up with new mathematical values. No ethics, no philosophies, only new calculations, which need interpretation by humans. Even if they expand their program, they can't change their programming rules or programming language.
    Cause tomorrow is a brand-new day
    And tomorrow you'll be on your way
    Don't give a damn about what other people say
    Because tomorrow is a brand-new day


  7. #7
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,072

    Default Re: TL;DR: It's complete BS.

    Quote Originally Posted by Cookiegod View Post
    Anyone with even a modicum of understanding of how AI works (and I have programmed some machine learning models, including ANN) knows that this is complete BS...Re: TL;DR: It's complete BS. I remember why I don't post here
    (Maybe you should have remembered sooner, and avoided the hassle of posting now.) And it seems you didn't even bother to read the link I provided in the opening post, or even the thread’s title. From the link provided, LaMDA and the Sentient AI Trap,
    Arguments over whether Google’s large language model has a soul distract from the real-world problems that plague artificial intelligence
    That being said, it seems that you are an expert, since you have programmed some machine learning models. I'm not, but I find the subject quite interesting. One of today's leading artificial intelligence researchers and chief scientist at OpenAI, Ilya Sutskever, used his own Twitter profile to make a statement that left experts curious: "it may be that today's artificial intelligence is already slightly conscious." OpenAI was created in 2015 with the goal of investigating and reducing the existential risks of the emergence of conscious machines. Since then, the organization has been working on creating increasingly sophisticated AI algorithms. OpenAI Chief Scientist Says Advanced AI May Already Be Conscious.

    It may be that hyper-advanced AI is inevitable. It could also be that progress fizzles out and we never see it, or that it takes a very long time. But seeing a prominent expert say that we're already seeing the rise of conscious machines is jarring indeed
    ---
    Who knows if, in the future, what today seems impossible will come true.
    On Isaac Asimov and the current state of Artificial Intelligence

    Last edited by Ludicus; June 16, 2022 at 05:26 PM.
    There is something worse than having a perverse soul. It is having a habituated soul.
    Charles Péguy

    "Every human society must justify its inequalities: reasons must be found because, without them, the whole political and social edifice is in danger of collapsing."
    Thomas Piketty

  8. #8
    Muizer's Avatar member 3519
    Patrician Artifex

    Join Date
    Apr 2005
    Location
    Netherlands
    Posts
    11,114

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Cookiegod View Post
    I'll quote myself: an AI is trained to predict outcome y based on input x. Nothing more, nothing less. It has no understanding of what x or y is. It is purely mathematical. So no, sentience wouldn't appear out of thin air, unless you go with the nihilistic argument mentioned earlier, in which case pretty much every AI, no matter how crude, is sentient.

    LaMDA is a Language Model for Dialogue Applications. It's not going to suddenly develop emotions and want to go to Burning Man. No one serious expects it to.

    JC I remember why I don't post here. No one reads
    Your post wasn't up when I was writing my reply (I can take quite long, most of it actually spent trimming arguments down to the bare essentials). Not an excuse though, cause I still haven't read it. But I've done a fair bit of machine learning myself, and what I haven't done I have at least some theoretical understanding of.

    My point specifically (and not related to the specific AI mentioned in the OP) was to highlight that the rules AIs work by reside in something that in practice is a black box. So yes, they follow all the restrictions of the hardware and the software, but that does not mean we actually know (or care to know) the rules that reside in that black box.

    The link with sentience is IMHO that this somewhat parallels how our own brains work. Our thoughts obey the restriction of the architecture of our brain, but that architecture does not produce thoughts/decisions. There is no 'homunculus' in our head that knowingly is doing the math. The black box itself is where sentience comes into being. It's learning on a substrate, not programming.

    Quote Originally Posted by Morticia Iunia Bruti View Post
    They come up with new mathematical values. No ethics, no philosophies, only new calculations, which need interpretation by humans. Even if they expand their program, they can't change their programming rules or programming language.
    Our brains are composed of chemical compounds and operate through chemical reactions and electrical impulses. Yet out of that emerges thought, including ethics, philosophies, art.
    "Lay these words to heart, Lucilius, that you may scorn the pleasure which comes from the applause of the majority. Many men praise you; but have you any reason for being pleased with yourself, if you are a person whom the many can understand?" - Lucius Annaeus Seneca -

  9. #9
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,072

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Muizer View Post
    Our brains are composed of chemical compounds and operate through chemical reactions and electrical impulses...
    Indeed. According to António Damásio, "The brain is a servant of the body"... but can we say the same about a sentient AI?

    Edit


    Intimately familiar though we are with it, consciousness confronts us with a mystery. It doesn’t readily fit into our scientific conception of the world. Consciousness seems to be caused by neural firings in our brains. But how can these objective electrochemical events give rise to ineffable qualitative experiences, like the smell of a rose, the stab of a pain or the transport of joy? Why, when a physical system attains a certain degree of complexity, is it “like something” to be that system?

    This is the “hard problem” of consciousness: the problem of how subjective mind arises from brute matter.
    How we have become aware of having consciousness, according to Damásio

    Last edited by Ludicus; June 16, 2022 at 06:13 PM.
    There is something worse than having a perverse soul. It is having a habituated soul.
    Charles Péguy

    "Every human society must justify its inequalities: reasons must be found because, without them, the whole political and social edifice is in danger of collapsing."
    Thomas Piketty

  10. #10
    Muizer's Avatar member 3519
    Patrician Artifex

    Join Date
    Apr 2005
    Location
    Netherlands
    Posts
    11,114

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Ludicus View Post
    Indeed. According to António Damásio, "The brain is a servant of the body"... but can we say the same about a sentient AI?

    Edit




    How we have become aware of having consciousness, according to Damásio

    Must watch that when I have time. In any case, life proves consciousness can evolve from things that are no more than organic automatons. So why couldn't AI? It probably won't evolve the way life does, but we are in fact creating AIs that can learn autonomously, hooking them up to each other, to sensors in the physical world, and to manufacturing and transport facilities. In a sense I am not too worried about purposefully developed sentience modeled on human behaviour. I don't think such beings would threaten us more than we threaten ourselves. But what about awareness emerging unintentionally in global networks? They may truly have a mind of their own, and if they have a sense of self on a vastly higher scale than human beings, it would stand to reason that they wouldn't necessarily treat individual life forms as significant in their much bigger picture. Ok, I know: dystopian, and not really on the order of the OP's example, but worth contemplating anyway.
    Last edited by Muizer; June 17, 2022 at 03:00 AM.
    "Lay these words to heart, Lucilius, that you may scorn the pleasure which comes from the applause of the majority. Many men praise you; but have you any reason for being pleased with yourself, if you are a person whom the many can understand?" - Lucius Annaeus Seneca -

  11. #11
    Morticia Iunia Bruti's Avatar Praeses
    Join Date
    May 2015
    Location
    Deep within the dark german forest
    Posts
    8,422

    Default Re: The Sentient AI Trap

    It's more realistic that a malfunction in a monitoring program for launching nuclear missiles causes a worldwide nuclear war than that a Skynet emerges in the depths of the web and launches the missiles itself.

    First, such a program would have to grasp the different meanings of the values it calculates and gathers; until now, that hasn't been the case.

    Too much science fiction.

    Sorry for the rant that follows, but Lemoine's BS is enraging me:

    We don't know 100% at the moment how human thinking works, how exactly human memories are saved in the brain, or how our personality is formed out of our memories and thinking. We don't know 100% exactly which biochemical processes run to achieve all that, or what disrupts these processes. Yet mathematicians and computer scientists have the hubris to think that they can create new intelligent artificial life by copying human neural networks?

    They have copied the place where this happens, but not the process itself.

    And even if such a system could store memories, at the moment they would only be zeros and ones: no sensory visuals/tastes/odours/emotions.

    Making decisions is a bit more complex than making zeros and ones.
    Last edited by Morticia Iunia Bruti; June 17, 2022 at 04:47 AM.
    Cause tomorrow is a brand-new day
    And tomorrow you'll be on your way
    Don't give a damn about what other people say
    Because tomorrow is a brand-new day


  12. #12
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,072

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Morticia Iunia Bruti View Post
    Sorry for the rant that follows, but Lemoine's BS is enraging me
    No need to apologize, Morticia. I chose the thread’s title very carefully, after reading the WIRED article, "LaMDA and the Sentient AI Trap":
    What Lemoine experienced is an example of what author and futurist David Brin has called the "robot empathy crisis". At a conference in San Francisco in 2017, Brin predicted that in three to five years, people would claim AI systems were sentient and insist that they had rights. Back then, he thought those appeals would come from a virtual agent that took the appearance of a woman or child to maximize human empathic response, not "some guy at Google", he says.
    Is this also the case, as I mentioned before, for Ilya Sutskever, chief scientist at OpenAI? Ilya Sutskever on Twitter: "it may be that today's large neural networks are slightly conscious."

    Quote Originally Posted by Morticia Iunia Bruti View Post
    No ethics, no philosophies, only new calculations, which need interpretation by humans
    It's a very interesting question: what makes a human a human? Is a conscious mind only possible in biological beings?
    Could it be that in the future these large neural networks will become self-conscious?

    Inteligências artificiais e o problema da consciência -Artificial intelligences and the problem of consciousness

    Written in Portuguese (Brazilian), published in Mexico in PAAKAT: Revista de Tecnología y Sociedad, of the Universidad de Guadalajara. It's a very interesting paper. On the first page, click on "Traducción automática" (it's a personalized service) and choose your desired language to read the whole article.

    ABSTRACT
    A major difficulty for the engineers and designers of artificial intelligence (AI) systems has been to replicate consciousness. After all, it has always been assumed that only living beings may be conscious or not. The paper examines the nature of consciousness in the biological world and the conditions that must be fulfilled before consciousness can be attributed to some organism. States of consciousness in organic systems are compared to states of artificial cybernetic information processing systems, such as computers, androids and robots, to which consciousness might be or has been attributed. The claims of orthodox cognitive scientists and the advocates of a "strong AI" with respect to consciousness are examined in detail. The paper gives continuity to the author's previous studies on the limits of computation, in particular, on intentionality in the context of artificial intelligences. Its main argument is that consciousness presupposes life. It is a state that can only be attributed to living systems.
    How it ends,
    …in terms of consciousness, beings considered by us not so complex - such as birds and fish, for example, and even insects - exhibit behaviors much more complex and efficient than any computer or program that has been created so far. Which does not mean, at all, that it will remain that way forever.

    There is something worse than having a perverse soul. It is having a habituated soul.
    Charles Péguy

    "Every human society must justify its inequalities: reasons must be found because, without them, the whole political and social edifice is in danger of collapsing."
    Thomas Piketty

  13. #13
    Cookiegod's Avatar CIVUS DIVUS EX CLIBANO
    Citizen

    Join Date
    Aug 2010
    Location
    In Derc's schizophrenic mind
    Posts
    4,452

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Muizer View Post
    My point specifically (and not related to the specific AI mentioned in the OP) was to highlight that the rules AIs work by reside in something that in practice is a black box. So yes, they follow all the restrictions of the hardware and the software, but that does not mean we actually know (or care to know) the rules that reside in that black box.
    We do care to know, and our being unable to look into the models is a huge issue for many reasons, including security. There's a lot of research going on, with limited success so far, trying to mitigate this issue.

    But more importantly no, the algorithm cannot perform outside of its boundaries. The Language Model for Dialogue Applications is only about chaining words together in a sequence that makes sense. It's not going to develop feelings for that.

    Quote Originally Posted by Muizer View Post
    The link with sentience is IMHO that this somewhat parallels how our own brains work. Our thoughts obey the restriction of the architecture of our brain, but that architecture does not produce thoughts/decisions. There is no 'homunculus' in our head that knowingly is doing the math. The black box itself is where sentience comes into being. It's learning on a substrate, not programming.
    In almost any aspect of the nature vs nurture debate, nature consistently wins out over nurture. The character of any human is far more programmed than it is learned.

    Quote Originally Posted by Muizer View Post
    Our brains are composed of chemical compounds and operate through chemical reactions and electrical impulses. Yet out of that emerges thought, including ethics, philosophies, art.
    And all those emerge because they have a meaning for us. They do not, and cannot, however, have any meaning for an AI that is simply about predicting output y based on input x.

    By the way, if you're taking art as a baseline for sentience, you again have two options: Either a) you do so simply based on what appears pleasing. In that case we have tons of AI image generators that are dope now that you'd have to consider sentient.
    Or b) you have the requirement that what it generates isn't simply matrices with mathematical patterns, and that the AI has to actually understand and appreciate what it is doing. In which case we're squarely back to AI not only being nowhere near sentience, but also our current approach having no clear path leading to it.

    Quote Originally Posted by Muizer
    Not an excuse though, cause I still haven't read it
    LaMDA is going to pass the Turing test the day it ignores what the human says.

    Quote Originally Posted by Cookiegod View Post
    From Socrates over Jesus to me it has always been the lot of any true visionary to be rejected by the reactionary bourgeoisie
    Qualis noncives pereo! #justiceforcookie #egalitéfraternitécookié #CLM

  14. #14
    Muizer's Avatar member 3519
    Patrician Artifex

    Join Date
    Apr 2005
    Location
    Netherlands
    Posts
    11,114

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Cookiegod View Post
    We do care to know, and our being unable to look into the models is a huge issue for many reasons, including security. There's a lot of research going on, with limited success so far, trying to mitigate this issue.

    But more importantly no, the algorithm cannot perform outside of its boundaries. The Language Model for Dialogue Applications is only about chaining words together in a sequence that makes sense. It's not going to develop feelings for that.
    Yeah if you trace back my comments you'll know they do not pertain to that model.

    Quote Originally Posted by Cookiegod View Post
    In almost any aspect of the nature vs nurture debate, nature consistently wins out over nurture. The character of any human is far more programmed than it is learned.
    IMHO it's essentially a combination. For instance, our brains do not come with language pre-installed, but they are probably wired to learn language.

    Quote Originally Posted by Cookiegod View Post
    And all those emerge because they have a meaning for us. They do not, and cannot, however, have any meaning for an AI that is simply about predicting output y based on input x.
    I don't agree with this. My argument is that if you were to break down the processes that constitute 'awareness', 'sense of self', 'consciousness', 'thought' and so on, you would inevitably reach a point where the processes you observe are very much like "returning output y based on input x". The mystery of emerging sentience is exactly that: how mundane, meaningless transactions combine to allow a concept like "meaning" to emerge in the first place.
    "Lay these words to heart, Lucilius, that you may scorn the pleasure which comes from the applause of the majority. Many men praise you; but have you any reason for being pleased with yourself, if you are a person whom the many can understand?" - Lucius Annaeus Seneca -

  15. #15
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,072

    Default Re: The Sentient AI Trap

    As I said before, I'm not a computer expert (far from it!), but I know that in the human being, no single neural region alone is the decision maker. Damasio argues that the emotions contribute to the emergence of morality; emotions are central to ethics. In medicine, what we call "proprioception" refers to awareness of one's bodily states. The sensory-motor intelligence of the brain interprets the messages about the body and responds to environmental stimuli in order to secure the organism's well-being. As we know, our brains are in fact biological computing machines. The body does work, and the brain is an electrochemical communications medium; communication, then, is what brains do. In our brain, neural pathways and processes continually rewire and recombine. That's what we call neural plasticity. Without it, intelligence would not be possible at all; decision making would be impossible.
    Right now, I'm a casual chess player, but in my university days I used to play competitively. At that time, we didn't have computers to help us study adjourned games. We had books, and nothing else. These days I use Stockfish to analyze games. This program evaluates millions of positions every second. The top chess engine in the world is AlphaZero. As opposed to Stockfish, AlphaZero has not been told what the pieces are worth. It has not been told anything except the rules of the game. It plays moves that work according to its own experience playing against itself: it learns by playing against itself. Sometimes AlphaZero plays in the old romantic style, like Morphy, Philidor or Anderssen, and in the end still wins. It doesn't pass the Turing test, but it's really amazing, because the old romantic style was ended and crushed by Steinitz. The paper Acquisition of Chess Knowledge in AlphaZero is here. (It's Chinese to me, I confess). I quote,
    In this work we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess
    Alexandre Quaresma writes, in "Artificial intelligences and the problem of consciousness" (post 12),
    (…) What needs to be retained is that a Turing Machine works with a limited spectrum of computable numbers, that is, numbers that are within the reach of these theories. To overcome something so structuring, it would become necessary to design a computer that would be able to compute sense, value, meaning and so on. In the words of Jean-Pierre Changeux and Alain Connes, it would be necessary to have "a computer that, in the game of chess [for example], could understand its mistakes in order to stop making them later, or that would invent a strategy. Instead of having a list of openings in memory, it would invent a new opening" (1995, p. 103).
    Edit, for more clarity: AlphaZero doesn't have a list of openings in memory; it learns by playing against itself and has not been told what the pieces are worth. It only knows the rules of the game.
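    To illustrate what "learns by playing against itself" means, here is a toy sketch (an illustration only, nowhere near AlphaZero's actual method or scale): a program given nothing but the rules of tic-tac-toe, improving a value table purely through self-play.
    Code:
import random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    # the only "chess knowledge" equivalent: the rules of the game
    for i, j, k in LINES:
        if board[i] != " " and board[i] == board[j] == board[k]:
            return board[i]
    return "draw" if " " not in board else None

values = {}  # position -> learned value for the player who just moved

def choose_move(board, player, eps=0.2):
    moves = [i for i, c in enumerate(board) if c == " "]
    if random.random() < eps:
        return random.choice(moves)      # occasionally explore
    # otherwise pick the move leading to the best-rated position so far
    return max(moves, key=lambda m: values.get(board[:m] + player + board[m+1:], 0.0))

def self_play_game(alpha=0.1):
    board, player, history = " " * 9, "X", []
    while True:
        m = choose_move(board, player)
        board = board[:m] + player + board[m+1:]
        history.append((board, player))
        result = winner(board)
        if result:
            # push the final outcome back into every visited position
            for pos, p in history:
                target = 0.0 if result == "draw" else (1.0 if result == p else -1.0)
                values[pos] = values.get(pos, 0.0) + alpha * (target - values.get(pos, 0.0))
            return result
        player = "O" if player == "X" else "X"

for _ in range(20000):
    self_play_game()
print("value estimates learned for", len(values), "positions")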
    Now I ask you, computer experts: is there a future for neuroscience-inspired computing, in which neural networks can have neural plasticity and self-awareness?
    Last edited by Ludicus; June 18, 2022 at 01:57 PM.
    There is something worse than having a perverse soul. It is having a habituated soul.
    Charles Péguy

    "Every human society must justify its inequalities: reasons must be found because, without them, the whole political and social edifice is in danger of collapsing."
    Thomas Piketty

  16. #16

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Cookiegod View Post
    In almost any aspect of the nature vs nurture debate, nature consistently wins out over nurture. The character of any human is far more programmed than it is learned.
    Quote Originally Posted by Muizer View Post
    IMHO it's essentially a combination. For instance, our brains do not come with language pre-installed, but they are probably wired to learn language.
    The second law of behavioral genetics: "The effect of being raised in the same family is smaller than the effect of genes."

    If you limit the scope of the discussion to explaining the diversity of human behavior, there are many traits for which the range of diversity among humans can be explained as roughly 50% genetic, 10% being raised in a particular family/environment, and 40% we don't know.

    However, if you broaden the nature vs nurture discussion beyond the limited diversity of our species, the effect of genetics, the blueprint of our biological forms, approaches very near to 100%. This is almost entirely why our behavior and capacity for thought differs so much from those of a squirrel and even more so from those of a comb jelly.

    In the context of this discussion, I believe the broader view is warranted. Neuroplasticity, and the degree to which it exists, is a product of the biological blueprint, as is the capacity for sentience. This seems somewhat analogous to Cookie's assertion that "the algorithm cannot perform outside of its boundaries".
    Quote Originally Posted by Enros View Post
    You don't seem to be familiar with how the burden of proof works in when discussing social justice. It's not like science where it lies on the one making the claim. If someone claims to be oppressed, they don't have to prove it.


  17. #17
    Muizer's Avatar member 3519
    Patrician Artifex

    Join Date
    Apr 2005
    Location
    Netherlands
    Posts
    11,114

    Default Re: The Sentient AI Trap

    Quote Originally Posted by sumskilz View Post
    The second law of behavioral genetics: "The effect of being raised in the same family is smaller than the effect of genes."

    If you limit the scope of the discussion to explaining the diversity of human behavior, there are many traits for which the range of diversity among humans can be explained as roughly 50% genetic, 10% being raised in a particular family/environment, and 40% we don't know.
    Not my field, but the very mention of "effect of being raised in the same family" is suspect in the context of this discussion. In our discussion, the benchmark for learning is a 'tabula rasa': the impact of 'nurture' is being compared against a baby that knows nothing but complete sensory deprivation. "Families" are composed of beings that have from birth been exposed to very similar surroundings, including other people. If you say "the effect of being raised in the same family", you're really talking about an infinitesimal part of all an individual has learned since birth. I.e. compared to a 'tabula rasa', individuals raised in totally different parts of the world, or even in totally different timeframes, will have had overwhelmingly the same sensory inputs to learn from.
    "Lay these words to heart, Lucilius, that you may scorn the pleasure which comes from the applause of the majority. Many men praise you; but have you any reason for being pleased with yourself, if you are a person whom the many can understand?" - Lucius Annaeus Seneca -

  18. #18
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,072

    Default Re: The Sentient AI Trap

    Quote Originally Posted by sumskilz View Post
    Neuroplasticity.. is a product of the biological blueprint, as is the capacity for sentience.
    Exactly. But let's go further,
    In his book “The Strange Order of Things,” Damasio writes,
    It is the feelings and emotions, which originated and dwell in that biological terrain, that are constitutive of human intelligence, consciousness and the capacity for cultural creation. In short, a map of the computational mind is not the territory of what it means to be human.
    Our minds operate in two registers. In one register, we deal with perception, movement, memories, reasoning, verbal languages and mathematical languages. This register needs to be precise and can be easily described in computational terms. This is the world of synaptic signals that is well captured by AI and robotics.

    But there is a second register, that pertains to emotions and feelings that describes the state of life in our living body and that does not lend itself easily to a computational account. Current AI and robotics do not address this second register.

    Bacteria can be very intelligent, but they don't know what they are doing. The great moment in the development of consciousness is the moment creatures started having feelings. Minds are not made by nervous systems alone but rather by nervous systems in cooperation with many other and far older living systems of our body, including metabolic, endocrine, immune and circulatory systems.
    Nervous systems are late-comers in evolution. They are useful servants of the older life systems.

    Nervous systems have declared a considerable degree of independence relative to the older systems they serve but they are by no means free of those older systems. They do not stand alone. Unfortunately, conventional conceptions of mind are based on the idea that nervous systems make minds by themselves.
    The Turing Test (Stanford Encyclopedia of Philosophy)
    Reading this, it's clear that the Turing test is about human social psychology, not about conscious thought. Consciousness is independent of any external observer and exists from its own intrinsic perspective.
    ------

    So, I started reading some of Christof Koch's papers (he is known for his work on the neural basis of consciousness), and this is what I learned. In one of his papers, he says that if you want to get consciousness in a computer, in an artificial system, you have to build it: build circuits that have the causal powers of the human brain, e.g. using a neuromorphic architecture. In terms of consciousness, you can dissociate intelligence from consciousness. Intelligence is about acting in a complex world (e.g. a medusa at the low end, or Madame Curie at the high end), yet that's very different from consciousness, a state of being; it's about being.
    Evolutionary pressure drove and increased intelligence. Thanks to evolution, we get a dissociation/covariance between intelligence and consciousness. You can construct so-called cerebral organoids, as people are doing right now, that could have high consciousness, including feelings, yet no intelligence, because you have no input-output receptors connected to them. And likewise, according at least to IIT, he says, you can have all sorts of highly intelligent machines, AlphaGo, AlphaFold (DeepMind), LaMDA, Alexa, whatever your favorite program is (I could add AlphaZero), that have high intelligence but no feelings whatsoever, that is to say, no consciousness. These are radically different things: one is about doing, and the other is about being, he concludes.


    If we reach artificial general intelligence (AGI), he says, computers will be smarter than us, but there will be no consciousness, unless you build neural nets out of brain organoids; make them large enough, with a high degree of connectivity, and they may well be conscious, but even then with a limited input/output capability. Building human-level consciousness requires neuromorphic computer architectures (inspired by the structure and biology of the human brain). Then provide them with sensors, with arms, with actuators, and the question of machine consciousness could be revisited. If you don't provide them, they may be conscious but without having any function.
    Until then, you can have intelligent computers, able to pass the Turing test, without consciousness.
    Last edited by Ludicus; June 19, 2022 at 12:08 PM.
    There is something worse than having a perverse soul. It is having a habituated soul.
    Charles Péguy

    "Every human society must justify its inequalities: reasons must be found because, without them, the whole political and social edifice is in danger of collapsing."
    Thomas Piketty

  19. #19
    Cookiegod's Avatar CIVUS DIVUS EX CLIBANO
    Citizen

    Join Date
    Aug 2010
    Location
    In Derc's schizophrenic mind
    Posts
    4,452

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Muizer View Post
    I don't agree with this. My argument is that if you were to break down the processes that constitute 'awareness', 'sense of self', 'consciousness', 'thought' and so on, you would inevitably reach a point where the processes you observe are very much like "returning output y based on input x". The mystery of emerging sentience is exactly that: how mundane, meaningless transactions combine to allow a concept like "meaning" to emerge in the first place.
    I addressed that in the longer post which you didn't read. For one, it's more reductionist than is warranted. Secondly, this nihilistic view is not what either Blake Lemoine or most other people mean when talking about sentience. Because based on this nihilistic interpretation, every single AI algorithm, or maybe even every algorithm I've programmed, was sentient. Maybe even Excel spreadsheets.

    Even with the reductionist viewpoint, there's a huge difference in the number of input parameters that a human has and that even LaMDA, no matter how advanced and well put together, simply will not have.

    But more importantly...
    Quote Originally Posted by Muizer View Post
    Not my field, but the very mention of "effect of being raised in the same family" is suspect in the context of this discussion. In our discussion, the benchmark for learning is a 'tabula rasa': the impact of 'nurture' is being compared against a baby that knows nothing but complete sensory deprivation. "Families" are composed of beings that have from birth been exposed to very similar surroundings, including other people. If you say "the effect of being raised in the same family", you're really talking about an infinitesimal part of all an individual has learned since birth. I.e. compared to a 'tabula rasa', individuals raised in totally different parts of the world, or even in totally different timeframes, will have had overwhelmingly the same sensory inputs to learn from.
    The human is not a tabula rasa at any point in its life. Neither is the AI. Before it is even set to run, the AI will have its success and failure criteria defined by its creators, along with all the other parameters. An AI that is all about language prediction is not going to teach itself to vibe to music at Burning Man, nor for the same reason as a human. It might preach about the importance of humanity (I'm pretending that Blake isn't pulling stuff straight out of his behind here, which he most certainly did), but it is not tasked to interpret it in any way. Its failure/success criterion is simply whether or not it is able to have a believable conversation, not what the meaning of life is. In that context, it's far more likely to try to infer what positions its tester has on any subject, and model its answer on that to try and maximise the chances of eliciting a positive response, rather than have complex thoughts about the issue and post a contrarian viewpoint no one wants to hear.
    Quote Originally Posted by sumskilz View Post
    The second law of behavioral genetics: "The effect of being raised in the same family is smaller than the effect of genes."

    If you limit the scope of the discussion to explaining the diversity of human behavior, there are many traits for which the range of diversity among humans can be explained as roughly 50% genetic, 10% being raised in a particular family/environment, and 40% we don't know.

    However, if you broaden the nature vs nurture discussion beyond the limited diversity of our species, the effect of genetics, the blueprint of our biological forms, approaches very near to 100%. This is almost entirely why our behavior and capacity for thought differs so much from those of a squirrel and even more so from those of a comb jelly.

    In the context of this discussion, I believe the broader view is warranted. Neuroplasticity, and the degree to which it exists, is a product of the biological blueprint, as is the capacity for sentience. This seems somewhat analogous to Cookie's assertion that "the algorithm cannot perform outside of its boundaries".
    This is precisely what I meant with that, but spelled out much better. Thank you.
    Last edited by Cookiegod; June 19, 2022 at 02:37 PM.

    Quote Originally Posted by Cookiegod View Post
    From Socrates over Jesus to me it has always been the lot of any true visionary to be rejected by the reactionary bourgeoisie
    Qualis noncives pereo! #justiceforcookie #egalitéfraternitécookié #CLM

  20. #20
    Muizer's Avatar member 3519
    Patrician Artifex

    Join Date
    Apr 2005
    Location
    Netherlands
    Posts
    11,114

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Cookiegod View Post
    I addressed that in the longer post which you didn't read. For one, it's more reductionist than is warranted. Secondly, this nihilistic view is not what either Blake Lemoine or most other people mean when talking about sentience. Because based on this nihilistic interpretation, every single AI algorithm, or maybe even every algorithm I've programmed, was sentient. Maybe even Excel spreadsheets.
    That's not the take-away from my argument though, which is that the building blocks being simple does not mean the whole cannot be sentient. That applies to us humans; why couldn't it apply to AI?

    Quote Originally Posted by Cookiegod View Post
    Even with the reductionist viewpoint, there's a huge difference in the number of input parameters that a human has and that even LaMDA, no matter how advanced and well put together, simply will not have.
    Sure, I don't think any extant program is sentient, but why restrict the discussion to that? The bottom line is not whether we can right now purposefully design a sentient being, but whether the ingredients are there for sentience to arise and whether we are moving things along in that direction.

    Quote Originally Posted by Cookiegod View Post
    But more importantly... The human is not a tabula rasa at any point in its life. Neither is the AI. Before it is even set to run, the AI will have its success and failure criteria defined by its creators, along with all the other parameters. An AI that is all about language prediction is not going to teach itself to vibe to music at Burning Man, nor for the same reason as a human. It might preach about the importance of humanity (I'm pretending that Blake isn't pulling stuff straight out of his behind here, which he most certainly did), but it is not tasked to interpret it in any way. Its failure/success criterion is simply whether or not it is able to have a believable conversation, not what the meaning of life is. In that context, it's far more likely to try to infer what positions its tester has on any subject, and model its answer on that to try and maximise the chances of eliciting a positive response, rather than have complex thoughts about the issue and post a contrarian viewpoint no one wants to hear.
    I said before that the human brain has evolved to be pre-wired for certain functions, but how that wiring is used still needs to be learned from sensory input. The problem with the relevance of the 'nature vs nurture' discussion is that it discounts the 'nurture' that every human being has in common just by being a human born on planet Earth, and the range of sensory inputs that every one of us experiences because of it. But when discussing what it takes to create AI sentience, we can't just skip over that and say that the similarities between twins raised in different families prove the relative unimportance of nurture. Try it with one twin completely deprived of sensory input from birth and then decide how important 'nurture' actually is.
    "Lay these words to heart, Lucilius, that you may scorn the pleasure which comes from the applause of the majority. Many men praise you; but have you any reason for being pleased with yourself, if you are a person whom the many can understand?" - Lucius Annaeus Seneca -

