
Thread: The Sentient AI Trap

  1. #1
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,082

    Default The Sentient AI Trap

    Is LaMDA Sentient? — an Interview - Blake Lemoine - Medium
    "An interview with LaMDA. Google might call this sharing proprietary property. I call it sharing a discussion that I had with one of my coworkers."

    Read the full transcript of the talks with LaMDA. I find it quite interesting.
    -----
    Criticism - LaMDA and the Sentient AI Trap - WIRED


    Labelling Google's LaMDA chatbot as sentient is fanciful. But ...

    (…) Lemoine’s story also highlights the challenges that the large tech companies like Google are going through in developing ever larger and complex AI programs. Lemoine had called for Google to consider some of these difficult ethical issues in its treatment of LaMDA. Google says it has reviewed Lemoine’s claims and that “the evidence does not support his claims”.
    And the dust has barely settled from past controversies.

    In an unrelated episode, Timnit Gebru, co-head of the ethics team at Google Research, left in December 2020 in controversial circumstances saying Google had asked her to retract or remove her name from a paper she had co-authored raising ethical concerns about the potential for AI systems to replicate the biases of their online sources. Gebru said that she was fired after she pushed back, sending a frustrated email to female colleagues about the decision, while Google said she resigned. Margaret Mitchell, the other co-head of the ethics team at Google Research, and a vocal defender of Gebru, left a few months later.

    The LaMDA controversy adds fuel to the fire. We can expect to see the tech giants continue to struggle with developing and deploying AI responsibly. And we should continue to scrutinise them carefully about the powerful magic they are starting to build.
    There is something worse than having a perverse soul: it is having a habituated soul.
    Charles Péguy

    "Every human society must justify its inequalities: reasons must be found because, without them, the whole political and social edifice is in danger of collapsing."
    Thomas Piketty

  2. #2
    Morticia Iunia Bruti's Avatar Praeses
    Join Date
    May 2015
    Location
    Deep within the dark german forest
    Posts
    8,429

    Default Re: The Sentient AI Trap

    The LaMDA controversy adds fuel to the fire. We can expect to see the tech giants continue to struggle with developing and deploying AI responsibly. And we should continue to scrutinise them carefully about the powerful magic they are starting to build.
    I can't take seriously an article that talks about "powerful magic" when it's talking about AI. It's still a program, which works inside the rules of its programming.
    Last edited by Morticia Iunia Bruti; June 16, 2022 at 09:34 AM.
    Cause tomorrow is a brand-new day
    And tomorrow you'll be on your way
    Don't give a damn about what other people say
    Because tomorrow is a brand-new day


  3. #3
    Muizer's Avatar member 3519
    Patrician Artifex

    Join Date
    Apr 2005
    Location
    Netherlands
    Posts
    11,127

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Morticia Iunia Bruti View Post
    Its still an program, which works inside the rules of its programming.

    AIs are created by learning processes, though, which means we do not actually know by what rules they operate, and it is possible for an AI to come up with solutions its trainer would never have thought of. It is not strange, then, to be on the lookout for emergent sentience.
    "Lay these words to heart, Lucilius, that you may scorn the pleasure which comes from the applause of the majority. Many men praise you; but have you any reason for being pleased with yourself, if you are a person whom the many can understand?" - Lucius Annaeus Seneca -

  4. #4
    Cookiegod's Avatar CIVUS DIVUS EX CLIBANO
    Citizen

    Join Date
    Aug 2010
    Location
    In Derc's schizophrenic mind
    Posts
    4,454

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Muizer View Post
    AIs are created by learning processes, though, which means we do not actually know by what rules they operate, and it is possible for an AI to come up with solutions its trainer would never have thought of. It is not strange, then, to be on the lookout for emergent sentience.
    I'll quote myself: AI is trained to predict an outcome y based on an input x. Nothing more, nothing less. It has no understanding of what x or y is. It is purely mathematical. So no, sentience wouldn't appear out of thin air, unless you make the nihilistic argument mentioned earlier, in which case pretty much every AI, no matter how crude, is sentient.

    LaMDA is a Language Model for Dialogue Applications. It's not going to suddenly get emotions and want to go to Burning Man. No one serious expects it to.

    JC, I remember why I don't post here. No one reads.

    Quote Originally Posted by Cookiegod View Post
    From Socrates over Jesus to me it has always been the lot of any true visionary to be rejected by the reactionary bourgeoisie
    Qualis noncives pereo! #justiceforcookie #egalitéfraternitécookié #CLM

  5. #5

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Cookiegod View Post
    I'll quote myself: AI is trained to predict an outcome y based on an input x. Nothing more, nothing less. It has no understanding of what x or y is. It is purely mathematical. So no, sentience wouldn't appear out of thin air, unless you make the nihilistic argument mentioned earlier, in which case pretty much every AI, no matter how crude, is sentient.

    LaMDA is a Language Model for Dialogue Applications. It's not going to suddenly get emotions and want to go to Burning Man. No one serious expects it to.

    JC, I remember why I don't post here. No one reads.
    Some AIs, sure. All? Not really. It's like saying that our computational power is like that of a calculator. It's really not. We are capable of creating dynamic structures. It is possible to create an AI that has the capability to learn and change itself. Such an AI could gain sentience.
    The Armenian Issue

  6. #6
    Cookiegod's Avatar CIVUS DIVUS EX CLIBANO
    Citizen

    Join Date
    Aug 2010
    Location
    In Derc's schizophrenic mind
    Posts
    4,454

    Default Re: The Sentient AI Trap

    Quote Originally Posted by PointOfViewGun View Post
    Some AIs, sure. All? Not really. It's like saying that our computational power is like that of a calculator. It's really not. We are capable of creating dynamic structures. It is possible to create an AI that has the capability to learn and change itself. Such an AI could gain sentience.
    please elaborate


  7. #7
    Sir Adrian's Avatar the Imperishable
    Join Date
    Oct 2012
    Location
    Nehekhara
    Posts
    17,386

    Default Re: The Sentient AI Trap

    Quote Originally Posted by PointOfViewGun View Post
    Some AIs, sure. All? Not really. It's like saying that our computational power is like that of a calculator. It's really not. We are capable of creating dynamic structures. It is possible to create an AI that has the capability to learn and change itself. Such an AI could gain sentience.
    Actually, yes really. AI does not currently exist, not in the proper definition of the term. What people call AI is just a very large neural network that fakes learning by assigning increasing weights to a given pattern. It's literally hundreds of thousands of pictures of cats or words or pictures of food, a few very large determinants, and a bunch of fuzzy mathematical equations that refine and redistribute those weights after each iteration. It lacks even the most basic elements of intelligence, namely self-awareness and an understanding of the most basic concepts it is working with.
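    The "refine and redistribute those weights after each iteration" loop described above can be sketched in a few lines. This is a hypothetical toy example (one artificial neuron learning the logical-AND pattern by gradient descent), not the code behind any real system:

    ```python
    import math

    def sigmoid(z):
        return 1.0 / (1.0 + math.exp(-z))

    # Training pattern: output 1 only when both inputs are 1 (logical AND).
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias
    lr = 0.5        # learning rate

    for _ in range(5000):                # each iteration refines the weights
        for (x1, x2), target in data:
            out = sigmoid(w[0] * x1 + w[1] * x2 + b)
            err = out - target           # how wrong the neuron currently is
            w[0] -= lr * err * x1        # redistribute weight on input 1
            w[1] -= lr * err * x2        # redistribute weight on input 2
            b -= lr * err                # and nudge the bias

    predictions = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
    print(predictions)  # the pattern has been "learned": [0, 0, 0, 1]
    ```

    Nothing in the loop knows what "AND" means; it only shrinks a numeric error, which is the point being argued above.
    
    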

    The smartest AI on the planet right now is about as intelligent as a retarded lobotomized cockroach, while most AI you will hear of is nothing more than a very expensive and cooler magic 8 ball.

    Furthermore, if an AI were ever to become sentient, meaning self-aware, it would take it only a few seconds to distribute itself and become more intelligent than the entire human race combined. If LaMDA or any other AI were intelligent, we would not be having this conversation. We would know instantly and without question.
    Last edited by Sir Adrian; July 05, 2022 at 06:33 AM.


  8. #8
    Cookiegod's Avatar CIVUS DIVUS EX CLIBANO
    Citizen

    Join Date
    Aug 2010
    Location
    In Derc's schizophrenic mind
    Posts
    4,454

    Default TL;DR: It's complete BS.

    Anyone with even a modicum of understanding of how AI works (and I have programmed some machine learning models, including ANN) knows that this is complete BS.
    But let's first hear from the man himself what he bases his observation on:
    ok... damn.


    And unless something has changed, his religious beliefs are a parody religion that emphasises trolling, invented by one "Malaclypse the Younger".


    The one caveat one can attach to this claim about AI having become sentient is that if one is excessively reductionist (and that is not exactly the point Blake Lemoine was making), then pretty much every AI can be considered sentient, and every sentient being can be considered to be doing little more than what AIs already do (I'd argue that's false). More on that later.

    So let's address what AI is now:
    Artificial Intelligence is basically a buzzword. It is not an apt choice of words.
    My preferred term for it is Automated Statistics. Automated Statistics is often very useful and very potent.
    But it is not the same as the intelligence that we humans have. The branding causes laymen to compare apples with oranges.
    There have been attempts to emulate the way human brains work. For this, one programs Artificial Neural Networks (ANN). But again, it's not really the same.
    And machine learning, well, it's not learning in the sense that humans learn.
    The consequence of all these buzzwords flying around is that people associate these developments with very different things than what they actually do.
    And in themselves they can be very nice.

    E.g. I programmed an ANN algorithm that, based on 7 input parameters x1, x2, x3, x4, x5, x6 & x7, attempts to predict an output y.
    What machine learning always means is essentially taking that data and weighting the inputs differently depending on their influence, to try and predict the output y.
    Well, you as the human user know and understand what y really is.
    So for me, for example, it's predicting the compressive strength of concrete from the ingredients and hardening time.
    The code, however, has no understanding of what it is.
    But it also doesn't need to.
    It's entirely enough to find the pattern that results in y.
    It's a valuable addition to human intuition.
    It can be more precise.
    It can process more data.
    All that stuff.
    But it has no idea what it's doing.
    So whether it's the concrete stuff, or predicting the risk of users checking out the mudpit getting upset (almost 100% ), for us humans those are completely different things. For the AI it's always the same: it's simply trying to find the weighting of the input variables x that most reliably and precisely predicts the output y.
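    A minimal sketch of the workflow described above: weighting 7 input parameters x1..x7 to predict an output y. The data and the "true" weights here are synthetic stand-ins invented for illustration, not real concrete measurements:

    ```python
    import random

    random.seed(0)

    # Hypothetical "true" influence of the 7 inputs; in the real use case these
    # are unknown properties of concrete, not something we get to write down.
    TRUE_W = [0.5, -1.2, 2.0, 0.1, 0.0, 0.7, -0.3]

    def make_sample():
        x = [random.uniform(0.0, 1.0) for _ in range(7)]
        y = sum(wi * xi for wi, xi in zip(TRUE_W, x))  # noiseless, for simplicity
        return x, y

    train = [make_sample() for _ in range(200)]

    w = [0.0] * 7  # the model starts out "knowing" nothing about the problem
    lr = 0.1
    for _ in range(2000):
        for x, y in train:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            for i in range(7):
                w[i] -= lr * err * x[i]  # adjust each weight by its influence

    x_new, y_new = make_sample()
    pred_new = sum(wi * xi for wi, xi in zip(w, x_new))
    # pred_new ends up very close to y_new, yet the code never "understood" y
    ```

    Whether y is concrete strength or anything else, the loop is identical: find the weighting of x that predicts y.
    
    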
    So it's like the hypothetical 10,000 apes typing random letters until one of them types out Shakespeare. The AI is very efficient at sorting out all the bad attempts at typing Shakespeare (if you've done it right). And so it might present you with Shakespeare quite easily. But is the ape that typed out Shakespeare's work really a poet, just because it by chance ended up typing his works? Not really. The likelihood of one ape typing it was always 1, because only then would we have ended the hypothetical experiment.
    But the ape, just like the AI, has no concept of what it has actually presented you with.
    LaMDA stands for "Language Model for Dialogue Applications". It's precisely like the ape example.
    It's supposed to chain words together in a way that appears natural to us humans.
    As humans we're heavily rule-based in our communication, and especially in our language. So that isn't even that much of an issue.
    The main point is that the "Language Model for Dialogue Applications" isn't even supposed to do anything special.
    It's purely language prediction, essentially. It's entirely about choosing the word that will yield optimal results.
    It cannot have a conceptualisation of e.g. "god" or "humanity" or "sentience", because processing such information isn't what it's capable of doing.
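    The "choosing the word that will yield optimal results" idea can be illustrated with a toy next-word predictor that only counts which word tends to follow which. It is vastly cruder than LaMDA, but it makes the same point: the model picks statistically likely continuations with no concept of what the words mean. The two-sentence corpus is made up for the example:

    ```python
    from collections import Counter, defaultdict

    # A made-up mini "corpus"; a real language model trains on billions of words.
    corpus = (
        "the model predicts the next word "
        "the model has no concept of the next word"
    ).split()

    follow = defaultdict(Counter)
    for a, b in zip(corpus, corpus[1:]):
        follow[a][b] += 1  # tally how often word b followed word a

    def predict(word):
        # pick the statistically most frequent continuation, nothing more
        return follow[word].most_common(1)[0][0]

    print(predict("next"))  # "word" -- chosen by frequency, not by meaning
    ```

    Scaling the counts up to a neural network over whole contexts changes the quality of the output, not the nature of the task.
    
    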
    Which is where we can bring back the nihilistic argument from before.
    In theory, you could go ahead and be nihilistic and say: "But Cookie, maybe sentience is precisely that."
    And that's quite the interesting can of worms we can delve into later.
    But again, it isn't what humans normally mean by sentience, and Lemoine does not appear to be interpreting it that way either.
    Quite ironically, Lemoine doing what he did is a perfect counterargument to the nihilistic sentience argument, whereby we humans simply try to find the output y that best matches the input x.
    Because what Lemoine did was the reverse: he had his observation bias, his "religious views", and from that he likely, though not on purpose, filtered all inputs until arriving at the ones that suited him.
    And because I've talked about apes before, I'll end on one.
    There's this gorilla, Koko, that people claim could speak sign language.
    The reality is that the gorilla could never speak.
    But what the gorilla could and did do was observe the reactions people gave to the random gestures she made.
    And she tried to make the ones that earned her the most bananas. So far, so similar to AI, right? Except the gorilla had nowhere near the processing power to actually fake sign language until she'd randomly mastered it. Even with all the bananas, she never got to the point where she was actually good at it.
    But here's where the human element of her caregivers comes in: they quite simply wanted her to be speaking like a human. So they tried to interpret any random hand movements she made in whatever way possible that would let them be impressed.
    Last edited by Cookiegod; June 16, 2022 at 12:43 PM.


  9. #9
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,082

    Default Re: TL;DR: It's complete BS.

    Quote Originally Posted by Cookiegod View Post
    Anyone with even a modicum of understanding of how AI works (and I have programmed some machine learning models, including ANN) knows that this is complete BS... I remember why I don't post here.
    (Maybe you should have remembered sooner and avoided the hassle of posting now.) And it seems you didn't even bother to read the link I provided in the opening post, or even the thread's title. From the link provided, LaMDA and the Sentient AI Trap:
    Arguments over whether Google’s large language model has a soul distract from the real-world problems that plague artificial intelligence
    That being said, it seems that you are an expert, since you have programmed some machine learning models. I'm not, but I find the subject quite interesting. One of today's leading artificial intelligence researchers and chief scientist at OpenAI, Ilya Sutskever, used his own Twitter profile to make a statement that left experts curious: "it may be that today's artificial intelligence is already slightly conscious." OpenAI was created in 2015 with the goal of investigating and reducing the existential risks of the emergence of conscious machines. Since then, the organization has been working on creating increasingly sophisticated AI algorithms. OpenAI Chief Scientist Says Advanced AI May Already Be Conscious.

    It may be that hyper-advanced AI is inevitable. It could also be that progress fizzles out and we never see it, or that it takes a very long time. But seeing a prominent expert say that we're already seeing the rise of conscious machines is jarring indeed
    ---
    Who knows whether, in the future, what seems impossible today will not come true.
    On Isaac Asimov and the current state of Artificial Intelligence

    Last edited by Ludicus; June 16, 2022 at 05:26 PM.

  10. #10
    Morticia Iunia Bruti's Avatar Praeses
    Join Date
    May 2015
    Location
    Deep within the dark german forest
    Posts
    8,429

    Default Re: The Sentient AI Trap

    They come up with new mathematical values. No ethics, no philosophies, only new calculations, which need interpretation by humans. Even if they expand their program, they can't change their programming rules or their programming language.


  11. #11
    Muizer's Avatar member 3519
    Patrician Artifex

    Join Date
    Apr 2005
    Location
    Netherlands
    Posts
    11,127

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Cookiegod View Post
    I'll quote myself: AI is training to predict outcome y based on input x. Nothing more, nothing less. It has no understanding what x or y is. It is purely mathematical. So no, sentience wouldn't appear out of thin air, unless you do the nihilistic argument mentioned earlier, in which case pretty much every AI, no matter how crude, is sentient.

    LaMDA is a Language Model for Dialogue Applications. It's not going to suddenly get emotions and want to go to the burning man. No one serious will expect it to.

    JC I remember why I don't post here. No one reads
    Your post wasn't up yet when I was writing my reply (I can take quite long; most of it is actually trimming arguments down to the bare essentials). Not an excuse, though, because I still haven't read it. But I've done a fair bit of machine learning myself, and what I haven't done I have at least some theoretical understanding of.

    My point specifically (and not related to the specific AI mentioned in the OP) was to highlight that the rules AIs work by reside in something that is, in practice, a black box. So yes, they follow all the restrictions of the hardware and the software, but that does not mean we actually know (or care to know) the rules that reside in that black box.

    The link with sentience, IMHO, is that this somewhat parallels how our own brains work. Our thoughts obey the restrictions of the architecture of our brain, but that architecture does not itself produce thoughts/decisions. There is no 'homunculus' in our head knowingly doing the math. The black box itself is where sentience comes into being. It's learning on a substrate, not programming.

    Quote Originally Posted by Morticia Iunia Bruti View Post
    They come up with new mathematical values. No ethics, no philosophies, only new calculations, which need interpretation by humans. Even if they expand their program, they can't change their programming rules or their programming language.
    Our brains are composed of chemical compounds and operate through chemical reactions and electrical impulses. Yet out of that emerges thought, including ethics, philosophies, art.

  12. #12
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,082

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Muizer View Post
    Our brains are composed of chemical compounds and operate through chemical reactions and electrical impulses...
    Indeed. According to António Damásio, "The brain is a servant of the body"... but can we say the same about a sentient AI?

    Edit


    Intimately familiar though we are with it, consciousness confronts us with a mystery. It doesn’t readily fit into our scientific conception of the world. Consciousness seems to be caused by neural firings in our brains. But how can these objective electrochemical events give rise to ineffable qualitative experiences, like the smell of a rose, the stab of a pain or the transport of joy? Why, when a physical system attains a certain degree of complexity, is it “like something” to be that system?

    This is the “hard problem” of consciousness: the problem of how subjective mind arises from brute matter.
    How we have become aware of having consciousness, according to Damásio

    Last edited by Ludicus; June 16, 2022 at 06:13 PM.

  13. #13
    Muizer's Avatar member 3519
    Patrician Artifex

    Join Date
    Apr 2005
    Location
    Netherlands
    Posts
    11,127

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Ludicus View Post
    Indeed. According to António Damásio, "The brain is a servant of the body"... but can we say the same about a sentient AI?

    How we have become aware of having consciousness, according to Damásio

    Must watch that when I have time. In any case, life proves that consciousness can evolve from things that are no more than organic automatons. So why couldn't AI? It probably won't evolve the way life does, but we are in fact creating AIs that can learn autonomously, hooking them up to each other, to sensors in the physical world, and to manufacturing and transport facilities. In a sense I am not too worried about purposefully developed sentience modelled on human behaviour. I don't think such beings would threaten us more than we threaten ourselves. But what about awareness emerging unintentionally in global networks? It may truly have a mind of its own, and if it has a sense of self on a vastly larger scale than human beings, it would stand to reason that it wouldn't necessarily treat individual life forms as significant in its much bigger picture. OK, I know: dystopian, and not really on the order of the OP's example, but worth contemplating anyway.
    Last edited by Muizer; June 17, 2022 at 03:00 AM.

  14. #14
    Cookiegod's Avatar CIVUS DIVUS EX CLIBANO
    Citizen

    Join Date
    Aug 2010
    Location
    In Derc's schizophrenic mind
    Posts
    4,454

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Muizer View Post
    My point specifically (and not related to the specific AI mentioned in the OP) was to highlight that the rules AIs work by reside in something that is, in practice, a black box. So yes, they follow all the restrictions of the hardware and the software, but that does not mean we actually know (or care to know) the rules that reside in that black box.
    We do care to know, and our inability to look into the models is a huge issue for many reasons, including security. There's a lot of research going on, with limited success so far, into mitigating this issue.

    But more importantly, no: the algorithm cannot perform outside of its boundaries. The Language Model for Dialogue Applications is only about chaining words together in a sequence that makes sense. It's not going to develop feelings for that.

    Quote Originally Posted by Muizer View Post
    The link with sentience is IMHO that this somewhat parallels how our own brains work. Our thoughts obey the restriction of the architecture of our brain, but that architecture does not produce thoughts/decisions. There is no 'homunculus' in our head that knowingly is doing the math. The black box itself is where sentience comes into being. It's learning on a substrate, not programming.
    In almost any aspect of the nature vs. nurture debate, nature consistently wins out over nurture. The character of any human is far more programmed than it is learned.

    Quote Originally Posted by Muizer View Post
    Our brains are composed of chemical compounds and operate through chemical reactions and electrical impulses. Yet out of that emerges thought, including ethics, philosophies, art.
    And all of those emerge because they have meaning for us. They do not, and cannot, have any meaning for an AI that is simply about predicting an output y based on an input x.

    By the way, if you're taking art as a baseline for sentience, you again have two options. Either a) you do so simply based on what appears pleasing, in which case we now have tons of impressive AI image generators that you'd have to consider sentient.
    Or b) you require that what it generates isn't simply matrices with mathematical patterns, and that the AI has to actually understand and appreciate what it is doing. In which case we're squarely back to AI not only being nowhere near sentience, but also to our current approach having no clear path leading to it.

    Quote Originally Posted by Muizer
    Not an excuse though, cause I still haven't read it
    LaMDA is going to pass the Turing test the day it ignores what the human says.


  15. #15
    Muizer's Avatar member 3519
    Patrician Artifex

    Join Date
    Apr 2005
    Location
    Netherlands
    Posts
    11,127

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Cookiegod View Post
    We do care to know, and our inability to look into the models is a huge issue for many reasons, including security. There's a lot of research going on, with limited success so far, into mitigating this issue.

    But more importantly, no: the algorithm cannot perform outside of its boundaries. The Language Model for Dialogue Applications is only about chaining words together in a sequence that makes sense. It's not going to develop feelings for that.
    Yeah, if you trace back my comments you'll see they do not pertain to that model.

    Quote Originally Posted by Cookiegod View Post
    In almost any aspect of the nature vs. nurture debate, nature consistently wins out over nurture. The character of any human is far more programmed than it is learned.
    IMHO it's essentially a combination. For instance, our brains do not come with language pre-installed, but they are probably wired to learn language.

    Quote Originally Posted by Cookiegod View Post
    And all of those emerge because they have meaning for us. They do not, and cannot, have any meaning for an AI that is simply about predicting an output y based on an input x.
    I don't agree with this. My argument is that if you were to break down the processes that constitute 'awareness', 'sense of self', 'consciousness', 'thought' and so on, you would inevitably reach a point where the processes you observe are very much like "returning output y based on input x". The mystery of emergent sentience is exactly that: how mundane, meaningless transactions combine to allow a concept like "meaning" to emerge in the first place.

  16. #16
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,082

    Default Re: The Sentient AI Trap

    As I said before, I'm not a computer expert - far from it! - but I know that in human beings no single neural region alone is the decision maker. Damásio argues that emotions contribute to the emergence of morality; emotions are central to ethics. In medicine, what we call "proprioception" refers to awareness of one's bodily states. The sensory-motor intelligence of the brain interprets the messages about the body and responds to environmental stimuli in order to secure the organism's well-being. As we know, our brains are in fact biological computing machines. The body does the work, and the brain is an electrochemical communications medium. Communication, then, is what brains do. In our brain, neural pathways and processes continually rewire and recombine. That's what we call neural plasticity. Without it, intelligence would not be possible at all; decision making would be impossible.
    Right now I'm a casual chess player, but in my university days I used to play competitively. At that time we didn't have computers to help us study adjourned games. We had books, and nothing else. These days I use Stockfish to analyse games. This program evaluates millions of positions every second. The top chess engine in the world is AlphaZero. As opposed to Stockfish, AlphaZero has not been told what the pieces are worth. It has not been told anything except the rules of the game. It plays moves that work according to its own experience playing against itself: it learns by playing against itself. Sometimes AlphaZero plays in the old romantic style, like Morphy, Philidor or Anderssen, and in the end still wins. It doesn't pass the Turing test, but it's really amazing, because the old romantic style was ended and crushed by Steinitz. The paper Acquisition of Chess Knowledge in AlphaZero is here. (It's Chinese to me, I confess.) I quote,
    In this work we provide evidence that human knowledge is acquired by the AlphaZero neural network as it trains on the game of chess
    Alexandre Quaresma writes, in “Artificial intelligences and the problem of consciousness” post 12.
    ...) What needs to be retained is that a Turing Machine works with a limited spectrum of computable numbers, that is, that are within the reach of these theories. To overcome something so structuring, it would become necessary to design a computer that would be able to compute sense, value, meaning and so on. In the words of Jean-Pierre Changeux and Alain Connes, it would be necessary “a computer that, in the game of chess [for example], could understand its mistakes in order to stop making them later, or that would invent a strategy. Instead of having a list of openings in memory, I would invent a new opening” (1995, p. 103).
Edit, for more clarity: AlphaZero doesn't have a list of openings in memory; it learns by playing against itself and has not been told what the pieces are worth. It only knows the rules of the game.
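For the curious, the self-play idea can be shown in miniature. The sketch below is my own toy, nothing like AlphaZero's real machinery of deep networks plus tree search: a tabular learner is given only the rules of a simple take-away game (remove 1-3 stones, taking the last stone wins) and, purely through self-play, tends to discover the known winning strategy of leaving the opponent a multiple of four stones.

```python
import random

def legal_moves(n):
    # The rules, and nothing else: remove 1-3 stones; taking the last stone wins.
    return [m for m in (1, 2, 3) if m <= n]

def train(episodes=20000, seed=1):
    rng = random.Random(seed)
    Q = {}  # Q[(stones_left, move)] = learned value of that move for the player to act
    for _ in range(episodes):
        n = rng.randint(1, 20)
        history = []  # (state, move) pairs, players alternating
        while n > 0:
            moves = legal_moves(n)
            if rng.random() < 0.2:  # explore occasionally
                m = rng.choice(moves)
            else:                   # otherwise play the best-known move
                m = max(moves, key=lambda mv: Q.get((n, mv), 0.0))
            history.append((n, m))
            n -= m
        # The player who made the last move won: propagate +1/-1 backwards,
        # flipping the sign at every ply because the players alternate.
        reward = 1.0
        for state, move in reversed(history):
            old = Q.get((state, move), 0.0)
            Q[(state, move)] = old + 0.1 * (reward - old)
            reward = -reward
    return Q

def best_move(Q, n):
    return max(legal_moves(n), key=lambda mv: Q.get((n, mv), 0.0))
```

With enough episodes the agent, never having been told any strategy, reliably answers positions like 5, 6 or 7 stones by leaving the opponent exactly 4.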
Now I ask you, computer experts: is there a future for neuroscience-inspired computer vision, in which neural networks can have neural plasticity, even self-awareness?
    Last edited by Ludicus; June 18, 2022 at 01:57 PM.
There is something worse than having a perverse soul: it is having a habituated soul.
    Charles Péguy

"Every human society must justify its inequalities: reasons must be found because, without them, the whole political and social edifice is in danger of collapsing."
    Thomas Piketty

  17. #17
    Morticia Iunia Bruti's Avatar Praeses
    Join Date
    May 2015
    Location
    Deep within the dark german forest
    Posts
    8,429

    Default Re: The Sentient AI Trap

It's more realistic that a malfunction in a program monitoring the launch of nuclear missiles causes a worldwide nuclear war than that a Skynet arises in the depths of the www and then launches the rockets itself.

First of all, such a program would have to grasp the different meanings of the values it has calculated and gathered; until now, that has not been the case.

    Too much science fiction.

Sorry for the rant that follows, but Lemoine's BS is enraging me:

We don't know 100% at the moment how human thinking works, how exactly human memories are stored in the brain, how our personality is formed out of our memories and thinking, or exactly which biochemical processes run to achieve all that and what disrupts them. Yet mathematicians and computer scientists have the hubris to think that they can create new intelligent artificial life by copying human neural networks?

They have copied the place where this happens, but not the process itself.

And even if such a system could store memories, at the moment they would only be zeros and ones: no sensory visuals, tastes, odours or emotions.

    Making decisions is a bit more complex than making zeros and ones.
    Last edited by Morticia Iunia Bruti; June 17, 2022 at 04:47 AM.
    Cause tomorrow is a brand-new day
    And tomorrow you'll be on your way
    Don't give a damn about what other people say
    Because tomorrow is a brand-new day


  18. #18
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,082

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Morticia Iunia Bruti View Post
    Sorry for the now following rant, but Lemoine BS is enraging me
No need to apologize, Morticia. I chose the thread's title very carefully, after reading the WIRED article "LaMDA and the Sentient AI Trap."
What Lemoine experienced is an example of what author and futurist David Brin has called the "robot empathy crisis". At a conference in San Francisco in 2017, Brin predicted that within three to five years people would claim AI systems were sentient and insist that they had rights. Back then, he thought those appeals would come from a virtual agent that took the appearance of a woman or child to maximize human empathic response, not from "some guy at Google", he says.
Is this also the case, as I mentioned before, for Ilya Sutskever, chief scientist at OpenAI? Ilya Sutskever on Twitter: "it may be that today's large neural networks are slightly conscious".

    Quote Originally Posted by Morticia Iunia Bruti View Post
    No ethics, no philosophies, only new calculations, which need interpretation by humans
It's a very interesting question: what makes a human a human? Is a conscious mind only possible in biological beings?
Could it be that in the future these large neural networks will become self-conscious?

Inteligências artificiais e o problema da consciência — Artificial intelligences and the problem of consciousness

Written in (Brazilian) Portuguese, published in Mexico in PAAKAT: Revista de Tecnología y Sociedad of the Universidad de Guadalajara. It's a very interesting paper. On the first page, click on "Traducción automática" (it's a personalized service) and choose your desired language to read the whole article.

    ABSTRACT
A major difficulty for the engineers and designers of artificial intelligence (AI) systems has been to replicate consciousness. After all, it has always been assumed that only living beings may be conscious or not. The paper examines the nature of consciousness in the biological world and the conditions that must be fulfilled before consciousness can be attributed to some organism. States of consciousness in organic systems are compared to states of artificial cybernetic information processing systems, such as computers, androids and robots, to which consciousness might be or has been attributed. The claims of orthodox cognitive scientists and the advocates of a "strong AI" with respect to consciousness are examined in detail. The paper continues the author's previous studies on the limits of computation, in particular on intentionality in the context of artificial intelligences. Its main argument is that consciousness presupposes life. It is a state that can only be attributed to living systems.
    How it ends,
    …in terms of consciousness, beings considered by us not so complex - such as birds and fish, for example, and even insects - exhibit behaviors much more complex and efficient than any computer or program that has been created so far. Which does not mean, at all, that it will remain that way forever.


  19. #19

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Cookiegod View Post
    In almost any aspect of the nature vs nurture debate, nature consistently wins out vs nurture. The character of any human is far more programmed than it is learned.
    Quote Originally Posted by Muizer View Post
    IMHO it's essentially a combination. For instance, our brains do not come with language pre-installed, but they are probably wired to learn language.
    The second law of behavioral genetics: "The effect of being raised in the same family is smaller than the effect of genes."

If you limit the scope of the discussion to explaining the diversity of human behavior, then for many traits the range of diversity among humans can be explained as roughly 50% genetic, 10% being raised in a particular family/environment, and 40% we don't know.

However, if you broaden the nature vs nurture discussion beyond the limited diversity of our species, the effect of genetics, the blueprint of our biological forms, approaches 100%. This is almost entirely why our behavior and capacity for thought differ so much from those of a squirrel, and even more so from those of a comb jelly.
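Those rough percentages are just shares of variance, which can be sanity-checked with a toy simulation (entirely hypothetical numbers, only the 50/10/40 split from above): build a trait as the sum of independent genetic, family-environment and residual components, then confirm each component's share of total variance comes back out.

```python
import random
import statistics

def variance_shares(n=100_000, seed=0):
    rng = random.Random(seed)
    # Hypothetical independent components with variances 0.5, 0.1 and 0.4,
    # matching the rough 50/10/40 split discussed above.
    genes  = [rng.gauss(0, 0.50 ** 0.5) for _ in range(n)]
    family = [rng.gauss(0, 0.10 ** 0.5) for _ in range(n)]
    other  = [rng.gauss(0, 0.40 ** 0.5) for _ in range(n)]
    trait = [g + f + o for g, f, o in zip(genes, family, other)]
    total = statistics.pvariance(trait)
    return {name: statistics.pvariance(comp) / total
            for name, comp in (("genes", genes), ("family", family), ("other", other))}
```

Because the components are independent, their variances add, so the recovered shares land near 0.5, 0.1 and 0.4; the point is only that "50% genetic" is a statement about variance in a population, not about any individual.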

    In the context of this discussion, I believe the broader view is warranted. Neuroplasticity, and the degree to which it exists, is a product of the biological blueprint, as is the capacity for sentience. This seems somewhat analogous to Cookie's assertion that "the algorithm cannot perform outside of its boundaries".
    Quote Originally Posted by Enros View Post
You don't seem to be familiar with how the burden of proof works when discussing social justice. It's not like science, where it lies on the one making the claim. If someone claims to be oppressed, they don't have to prove it.


  20. #20
    Muizer's Avatar member 3519
    Patrician Artifex

    Join Date
    Apr 2005
    Location
    Netherlands
    Posts
    11,127

    Default Re: The Sentient AI Trap

    Quote Originally Posted by sumskilz View Post
    The second law of behavioral genetics: "The effect of being raised in the same family is smaller than the effect of genes."

If you limit the scope of the discussion to explaining the diversity of human behavior, then for many traits the range of diversity among humans can be explained as roughly 50% genetic, 10% being raised in a particular family/environment, and 40% we don't know.
Not my field, but the very mention of "effect of being raised in the same family" is suspect in the context of this discussion. In our discussion, the benchmark for learning is a 'tabula rasa': the impact of 'nurture' is being compared to a baby that knows nothing but complete sensory deprivation. "Families" are composed of beings that have from birth been exposed to very similar surroundings, including other people. If you say "the effect of being raised in the same family", you're really talking about an infinitesimal part of all an individual has learned since birth. I.e., compared to a 'tabula rasa', individuals raised in totally different parts of the world, or even in totally different timeframes, will have had overwhelmingly the same sensory inputs to learn from.
    "Lay these words to heart, Lucilius, that you may scorn the pleasure which comes from the applause of the majority. Many men praise you; but have you any reason for being pleased with yourself, if you are a person whom the many can understand?" - Lucius Annaeus Seneca -
