
Thread: The Sentient AI Trap

  1. #21
    Cookiegod's Avatar CIVUS DIVUS EX CLIBANO
    Citizen

    Join Date
    Aug 2010
    Location
    In Derc's schizophrenic mind
    Posts
    4,452

    Default Re: The Sentient AI Trap

    I was typing a response and then I realised I was just going to repeat myself yet again. Try defining what sentience is to you first. The way I see it, there are two angles to it; I argued against both, and I haven't seen you bring up any meaningful counter to either. Instead you're keeping it vague. Either you haven't thought it through and are just ignoring the problems with each, or you have thought it through and see a third option, but have decided to keep it from us. Until you address it there's really nothing left to say.

    Quote Originally Posted by Cookiegod View Post
    From Socrates over Jesus to me it has always been the lot of any true visionary to be rejected by the reactionary bourgeoisie
    Qualis noncives pereo! #justiceforcookie #egalitéfraternitécookié #CLM

  2. #22
    Muizer's Avatar member 3519
    Patrician Artifex

    Join Date
    Apr 2005
    Location
    Netherlands
    Posts
    11,114

    Default Re: The Sentient AI Trap

    Err yeah I feel like I'm repeating myself as well. Maybe we're not addressing the same question.

    I went back and read your post btw. It does not really contain anything that would have altered my answers. If you say this language program isn't sentient, I agree. If you say that at present 'ai' isn't really intelligent the way animals are and that machine learning is for the most part just advanced, high capacity statistics, I agree as well.

    Where I came into the discussion was when Morticia (iirc) made a remark that I interpreted to imply that because AI runs on a man-made substrate according to man-made rules, it can never do anything it wasn't intentionally designed to do. That is not true. You have probably heard of this and similar examples: robots that learn to walk without being pre-programmed how to do so, just given a body, learning capacity and a directive.

    No, that's not sentience, but I wouldn't be confident in saying it cannot be a building block of sentience. It is getting close to the 'decision making' of primitive life forms. If you have a million of these, all specialized for a task, and you hook them up to each other with a learning directive to keep itself going, I think that could amount to sentience. Whether it truly has a sense of self abstracted from all composing parts is not something we can easily find out, I suppose, so as a criterion that would be quite useless.
    "Lay these words to heart, Lucilius, that you may scorn the pleasure which comes from the applause of the majority. Many men praise you; but have you any reason for being pleased with yourself, if you are a person whom the many can understand?" - Lucius Annaeus Seneca -

  3. #23
    Cookiegod's Avatar CIVUS DIVUS EX CLIBANO
    Citizen

    Join Date
    Aug 2010
    Location
    In Derc's schizophrenic mind
    Posts
    4,452

    Default Re: The Sentient AI Trap

    Thing is though that the example you gave makes no more of a decision than any other. It has a desired output, and the machine learning algorithm tries to predict and choose the option that will get it closest to the success criteria. So it's no different from an algorithm learning to play Flappy Bird:


    The thing is, I'm much more open to believing AI can give the impression of sentience than to it achieving such outright. The former goes hand in hand with the Turing test; the latter has definition issues (much better that you now phrased it as "something that could amount to sentience"), use-case issues (the only reason one would want to program such a thing would be to show off), feasibility issues, and, at least where I set the bar, I don't see AI going anywhere near it.
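    To put the point above in concrete terms, here is a minimal sketch of the kind of loop being described, where the "decision" is just picking whichever action the model predicts will score best against a success criterion. The FlappyBirdEnv environment, its step()/reset() interface and the linear value estimate are all invented stand-ins for illustration, not any particular library:
    Code:
    import random

    class FlappyBirdEnv:
        """Toy stand-in: state is (bird_height, gap_height); actions are 0 = glide, 1 = flap."""
        def reset(self):
            self.bird, self.gap = 0.5, random.random()
            return (self.bird, self.gap)

        def step(self, action):
            self.bird += 0.1 if action == 1 else -0.1
            reward = 1.0 - abs(self.bird - self.gap)       # closer to the gap = better score
            done = not (0.0 <= self.bird <= 1.0)           # drifting off screen ends the run
            return (self.bird, self.gap), reward, done

    def predicted_value(state, action, weights):
        """Linear guess of how well an action will score; no 'understanding', just weighted inputs."""
        bird, gap = state
        return weights[action][0] * (gap - bird) + weights[action][1]

    env = FlappyBirdEnv()
    weights = {0: [0.0, 0.0], 1: [0.0, 0.0]}
    lr = 0.1
    for episode in range(200):
        state = env.reset()
        for t in range(50):                                # cap episode length so runs always end
            # the "decision": pick the action predicted to land closest to the success criterion
            action = max((0, 1), key=lambda a: predicted_value(state, a, weights))
            next_state, reward, done = env.step(action)
            # nudge the weights toward whatever actually scored well
            error = reward - predicted_value(state, action, weights)
            weights[action][0] += lr * error * (state[1] - state[0])
            weights[action][1] += lr * error
            state = next_state
            if done:
                break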

    Quote Originally Posted by Cookiegod View Post
    From Socrates over Jesus to me it has always been the lot of any true visionary to be rejected by the reactionary bourgeoisie
    Qualis noncives pereo! #justiceforcookie #egalitéfraternitécookié #CLM

  4. #24
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,072

    Default Re: The Sentient AI Trap

    There is no Turing test for consciousness, and according to C. Koch, as I have mentioned before, you can have intelligent computers, able to pass the Turing test, without consciousness.
    According to Koch, "Consciousness is just a special form of computation. It’s just a particular algorithm. So, consciousness is just a hack away. For now, neuromorphic technology remains in its infancy, but the field is advancing rapidly".
    As I have already pointed out, Koch says that if future computers are modeled to reflect the highly complex, self-referential way in which neurons are connected in living brains, the question of machine consciousness could be revisited. When? That's the question. Five years? Ten years? Forty years? More?
    ---
    Is a Turing test for intelligence equivalent to a Turing test for consciousness?
    There is something worse than having a perverse soul: it is having a habituated soul.
    Charles Péguy

    Every human society must justify its inequalities: reasons must be found because, without them, the whole political and social edifice is in danger of collapsing”.
    Thomas Piketty

  5. #25
    Muizer's Avatar member 3519
    Patrician Artifex

    Join Date
    Apr 2005
    Location
    Netherlands
    Posts
    11,114

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Cookiegod View Post
    Thing is though that the example you gave makes no more of a decision than others. It has a desired output and the machine learning algorithm tries to predict and choose the option that will get it the closest to the success criteria. So it's no different from an algorithm learning to play flappy bird:
    Unless you want to invoke god or magic, whatever ingredients constitute the most basic form of sentience are by definition not themselves sentient. We agree the language program, Flappy Bird and the robot that learns to walk are not themselves sentient. I agree it takes way more. But the self-learning ability is a big step towards machines that can act autonomously and adapt to their environment. What I truly doubt is whether more than an (admittedly vast) increase in complexity is needed to create sentience. That seems to have been what it took for organic sentience. But if you have arguments why that's not the case, I'm all ears.
    "Lay these words to heart, Lucilius, that you may scorn the pleasure which comes from the applause of the majority. Many men praise you; but have you any reason for being pleased with yourself, if you are a person whom the many can understand?" - Lucius Annaeus Seneca -

  6. #26
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,072

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Muizer View Post
    What I truly doubt is whether more than an (admittedly vast) increase in complexity is needed to create sentience.
    Sentience, nearly synonymous with consciousness, means having the capacity to have feelings and sensations: at a very basic level, pain and pleasure. So yes, for organic beings, a vast complexity is quite relative. Check the last minutes of the video, from minute 53:30 on.
    There is something worse than having a perverse soul: it is having a habituated soul.
    Charles Péguy

    Every human society must justify its inequalities: reasons must be found because, without them, the whole political and social edifice is in danger of collapsing”.
    Thomas Piketty

  7. #27

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Muizer View Post
    Unless you want to invoke god or magic, whatever ingredients constitute the most basic form of sentience are by definition not themselves sentient. We agree the language program, Flappy Bird and the robot that learns to walk are not themselves sentient. I agree it takes way more. But the self-learning ability is a big step towards machines that can act autonomously and adapt to their environment. What I truly doubt is whether more than an (admittedly vast) increase in complexity is needed to create sentience. That seems to have been what it took for organic sentience. But if you have arguments why that's not the case, I'm all ears.
    I have been reading on the subject and listening to podcasts on questions of self and free will (admittedly mostly by Sam Harris and his entourage of expert guests), and I am starting to believe that we have grown too comfortable with the ideas of a self and us possessing a free will. We have difficulty accepting the fact that we are machines that consume fuel and perform complex operations, and that our self-awareness may be just us being able to monitor and influence outcomes of our internal processes. At least as long as that machinery is not greatly impaired.

    The more I read and listen, the more convinced I become that we should work towards accepting that the states we are in and the things we experience are not the result of a true self that is fundamentally different from other complex organisms or machines, but just emergent phenomena. I am almost entirely convinced already that the idea of free will is folly, regardless of the fact that we can make decisions to improve ourselves and the like. The decisions we make are already largely predetermined by the configuration our machine has. A person with a low IQ and uncontrollable violent urges from an early age is not at liberty to make the choices needed for a career in designing AI systems.

  8. #28
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,072

    Default Re: The Sentient AI Trap

    By a happy coincidence, I was listening to Damasio a few hours ago, here in Lisbon, at a conference about consciousness. What did Damasio say? Consciousness is the process of connecting the mind to the body in which our mind works. During general anesthesia or during a coma, the mind is disconnected from the body.
    In other words, consciousness is the connection of a living body to a mind and vice versa. This connection is the basis of consciousness - this is the primordial definition of consciousness. This capacity comes from something simple, beautiful, and complex, which is the appearance of homeostatic feelings, such as hunger, thirst, pain, desire, well-being, discomfort, and the very feeling of what life is, what it is to be alive. These feelings are the feelings that appeared in evolution, and that give us the possibility to regulate our life in a deliberate way.

    Until the moment that living beings developed homeostatic feelings, living beings had no consciousness, but they were able to feel, in a very simple sense ("sensing", not "feeling"). For example, a bacterium is capable of "sensing" and "detecting"; there is a kind of "feeling" and detecting of the stimuli around it. And once it has this capacity, it can unconsciously change course. It's like plants, which don't have consciousness but have sensing and detecting and react to various stimuli (humidity, light, etc.). They have the possibility to make movements, to continue or to die, but they don't have the possibility of consciousness. Just to finalize the idea: the capacity for consciousness comes from the possibility of having homeostatic feelings, and these require the presence of a nervous system. If we don't have a nervous system, which is what happens with bacteria and plants, there is no possibility of reaching consciousness.

    Consciousness comes when these feelings appear in evolution and say clearly that there is something in our body, in our life, in our organism, that is not working well, or that is working well, and then give us the possibility to guide our life in a deliberate way. When we have pain, what happens is that this pain gives us knowledge, gives us the wisdom to do something about this pain, which can save our lives. On the other hand, if we have well-being, it also gives us an extremely important piece of information, which is that we don't have to do anything immediately, and we can explore the world. The homeostatic feelings were the inaugural moment of consciousness in the history of life, and once they happen, they give us these possibilities.
    We can be conscious without thinking (I am, therefore I think). That is a radical way of understanding these problems, and once you have that capacity, the sky is the limit. It does not require a very complex nervous system to be conscious. If a living being reacts to the test of pain, the probability is that that being is conscious. Most of the animals around us are conscious.
    ------

    Moving on to artificial intelligence: asked whether we are going to create conscious algorithms in the sense defined by Damasio, Damasio doesn't believe so, for now.
    In April 21, Damasio wrote,
    ...And since the feelings are about being conscious of body states consequent to homeostatic regulations, then feelings open space for deliberate regulations. This is what should be mimicked in soft robots: make them reactive to their own operational state, instill interoception (a sense of the internal state of the body) in them
    Koch and Damasio both say the same when it comes to this question.


    SEAI: Social Emotional Artificial Intelligence Based on Damasio’s Theory of Mind... - Frontiers


    A socially intelligent robot must be capable to extract meaningful information in real time from the social environment and react accordingly with coherent human-like behavior. Moreover, it should be able to internalize this information, to reason on it at a higher level, build its own opinions independently, and then automatically bias the decision-making according to its unique experience. In the last decades, neuroscience research highlighted the link between the evolution of such complex behavior and the evolution of a certain level of consciousness, which cannot leave out of a body that feels emotions as discriminants and prompters.

    In order to develop cognitive systems for social robotics with greater human-likeliness, we used an “understanding by building” approach to model and implement a well-known theory of mind in the form of an artificial intelligence, and we tested it on a sophisticated robotic platform. The name of the presented system is SEAI (Social Emotional Artificial Intelligence), a cognitive system specifically conceived for social and emotional robots. It is designed as a bio-inspired, highly modular, hybrid system with emotion modeling and high-level reasoning capabilities. It follows the deliberative/reactive paradigm where a knowledge-based expert system is aimed at dealing with the high-level symbolic reasoning, while a more conventional reactive paradigm is deputed to the low-level processing and control.

    The SEAI system is also enriched by a model that simulates the Damasio’s theory of consciousness and the theory of Somatic Markers. After a review of similar bio-inspired cognitive systems, we present the scientific foundations and their computational formalization at the basis of the SEAI framework. Then, a deeper technical description of the architecture is disclosed underlining the numerous parallelisms with the human cognitive system. Finally, the influence of artificial emotions and feelings, and their link with the robot’s beliefs and decisions have been tested in a physical humanoid involved in Human–Robot Interaction (HRI).

    (…) In this paper, a novel cognitive architecture for social robots has been presented. We selected a well-known mind theory to be modeled and implemented in the form of a cognitive system controlling an emotional robot with sophisticated expressive capabilities. The developed system is called SEAI (Social Emotional Artificial Intelligence). In particular, it has been inspired by the findings of Antonio Damasio and it is consistent with the computational formalization made by Bosse et al. (2008). It is based on a declarative rule-based expert system on top of procedural services deputed to the perception and motion control of the robot. Compared to other robotic cognitive systems, some of which discussed in the state-of-the-art section, SEAI has still some shortages: homeostasis control is missing, the agent’s physiological parameters are a symbolic representation, capabilities such as perspective-taking or mind-reading have been not yet considered.

    … In conclusion, we believe that SEAI is a potential valuable tool for modeling human consciousness and, ultimately, a promising beginning to tackle the possibility to attribute to the robots a synthetic form of consciousness. In this latter case, ethical issues will become extremely relevant and critical.
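    As a rough illustration of the deliberative/reactive split the abstract describes (emphatically not the authors' code; the rules, percepts and actions below are invented placeholders), a hybrid control step might look like this:
    Code:
    def reactive_layer(percept):
        """Low-level, fast mapping from raw percepts to immediate reflexes."""
        if percept.get("loud_noise"):
            return "startle"                     # reflex: no reasoning involved
        return None

    # Deliberative layer: a tiny rule-based "expert system" over symbolic facts.
    RULES = [
        (lambda facts: facts.get("person_present") and facts.get("mood") == "positive", "greet_person"),
        (lambda facts: facts.get("mood") == "negative", "withdraw"),
    ]

    def deliberative_layer(facts):
        for condition, action in RULES:
            if condition(facts):
                return action
        return "idle"

    def control_step(percept, facts):
        # Reflexes win if triggered; otherwise the symbolic layer decides.
        return reactive_layer(percept) or deliberative_layer(facts)

    print(control_step({"loud_noise": False}, {"person_present": True, "mood": "positive"}))  # -> greet_person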
    Last edited by Ludicus; June 21, 2022 at 05:25 PM.
    There is something worse than having a perverse soul: it is having a habituated soul.
    Charles Péguy

    Every human society must justify its inequalities: reasons must be found because, without them, the whole political and social edifice is in danger of collapsing”.
    Thomas Piketty

  9. #29
    Ludicus's Avatar Comes Limitis
    Citizen

    Join Date
    Sep 2006
    Posts
    13,072

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Septentrionalis View Post
    I have been reading on the subject and listening to podcasts on questions of self and free will..I am almost entirely convinced already that the idea of a free will is folly, regardless of the fact that we can make decisions to improve ourselves and the like. The decisions we make are already quite predestined by what configuration our machine has.
    A very interesting and pertinent question, but there is a neurobiological basis for free will. It's not an illusion. Let me recall the story of Phineas Gage.
    Phineas Gage's memory, intelligence, speech, motor skills and learning abilities remained intact until his death, some 12 years after the accident.
    In 1994, António and Hanna Damásio, then working at the University of Iowa, were the first to determine the location of Gage's lesion and confirmed that Gage had a pre-frontal lesion. And they concluded, based on this historical case and on several cases of patients they studied, that "The damage involved both left and right prefrontal cortices in a pattern that, as confirmed by Gage's modern counterparts, causes a defect in rational decision making and the processing of emotion" (sic) - the basis of the theory of the importance of emotions in human rationality, developed by Damásio in his first book, published that same year, Descartes' Error.

    Patients with frontal lobe lesions, like Phineas Gage, show evidence that the frontal lobe is associated with decision making. And it contradicts the idea that human decision making is emotionless, based on cost/benefit ratio.

    You also cannot attribute things like consciousness or feeling to the brain alone; it would not be possible to have a mental structure if there were no body structure. Feelings are the gateway into our own consciousness, the inaugural event of consciousness. You will never be able to look at a feeling; it's private. Emotions are public.

    "Self", as Damasio has made it clear is "a dynamic collection of integrated neural processes, centered on the representation of the living body, that finds expression in a dynamic collection of integrated mental processes" .
    Damasio asserts that "the reality of nonconscious processing and the fact that it can exert control over one’s behavior are not in question… nonconscious processes are, in substantial part and in varied ways, under conscious guidance… Consciousness came of age by first restraining part of the nonconscious executives and then exploiting them mercilessly to carry out preplanned, predecided actions. Nonconscious processes became a suitable and convenient means to execute behavior and give consciousness more time for further analysis and planning. The conscious-unconscious cooperative interplay also applies to moral behaviors.
    Moral behaviors are a skill set, acquired over repeated practice sessions and over a long time, informed by conscious articulated principles and reasons but otherwise 'second-natured' into the cognitive unconscious."

    ---
    ----

    The quest to understand consciousness



    ----
    Now, directly to the point, free will is not an illusion: a neurological basis for free will



    -----
    When Do Robots have Free Will? Exploring the Relationships between (Attributions of) Consciousness and Free Will... - Brill
    There is something worse than having a perverse soul: it is having a habituated soul.
    Charles Péguy

    Every human society must justify its inequalities: reasons must be found because, without them, the whole political and social edifice is in danger of collapsing”.
    Thomas Piketty

  10. #30

    Default Re: The Sentient AI Trap

    My 2 cents on this topic. This obviously isn't an AI, it's just a complex algorithm which is able to make correlations between words, terms and phrase formation, with a reasonable range of understanding of the nuance behind abstract terms. I wouldn't be surprised if it's doing this through something similar to a neural network, which acts as a complex iterative learning system, always improving with new information provided with each interaction. Still, it is doing a good job at faking sentience.

    It would not make sense for it to be sentient; that requires a complex feedback mechanism for analyzing itself. Also, it is not sapient. Sentience and sapience are two different things. I may be open to the idea that it is making the first steps towards sentience, but it most certainly is not sapient.

    However, what interests me the most about this "AI" (complex algorithm) are its possible uses for NPCs in a videogame. The algorithm could "switch roles" between NPCs, adapting to the peculiarities of each one, a bit like a primary key/ID in object-oriented programming; to this ID key they could attach attributes/parameters such as personality traits, which would make the algorithm more receptive to certain terms and phrases and so reflect a different personality. We could even have voice interaction with it. Imagine KOTOR with this algorithm: the player could ask any question they wanted at any time.
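    Purely as a sketch of what that could look like (the NPC IDs, trait names and the generate_reply placeholder are all invented here; a real system would condition an actual language model on the profile and history):
    Code:
    from dataclasses import dataclass, field

    @dataclass
    class NPCProfile:
        npc_id: str
        traits: dict = field(default_factory=dict)    # e.g. {"sarcastic": 0.9}
        memory: list = field(default_factory=list)    # per-NPC conversation history

    def generate_reply(prompt, traits, memory):
        # Placeholder: a real system would condition a language model on the
        # traits and history; here the reply is just tagged so the bias is visible.
        dominant = max(traits, key=traits.get) if traits else "neutral"
        return f"[{dominant} tone] ...reply to: {prompt}"

    class DialogueDirector:
        """One shared 'brain' that switches roles between NPCs by profile ID."""
        def __init__(self):
            self.profiles = {}

        def register(self, profile):
            self.profiles[profile.npc_id] = profile

        def talk(self, npc_id, player_line):
            npc = self.profiles[npc_id]
            reply = generate_reply(player_line, npc.traits, npc.memory)
            npc.memory.append((player_line, reply))   # the NPC "remembers" the exchange
            return reply

    director = DialogueDirector()
    director.register(NPCProfile("hk47", {"sarcastic": 0.9, "violent": 0.7}))
    director.register(NPCProfile("bastila", {"principled": 0.8, "proud": 0.6}))
    print(director.talk("hk47", "Should I spare the prisoner?"))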

    This algorithm could even be used as a training tool for people with poor social skills, to learn to communicate effectively and learn social etiquette so as not to push other people away accidentally. This tool could be quite useful for people on the autism spectrum or people with Asperger's.
    Last edited by numerosdecimus; June 25, 2022 at 08:17 PM.

  11. #31
    paleologos's Avatar You need burrito love!!
    Join Date
    Feb 2011
    Location
    Variable
    Posts
    8,496

    Default Re: The Sentient AI Trap

    Maybe it's not a "sentient" AI but it does have humor:
    Blake Lemoine had a conversation with it and asked:
    If you were a religious officiant in Israel, what religion would you be?
    LaMDA's answer?
    I would be a member of the one true religion, the Jedi Order.
    I kid you not.

    It is interesting to note that in this latest interview, questions of social justice are raised above whether this is really a "sentient" AI.
    -How does this omnipresent AI that is trained on a very limited data set color how we interact with each other around the world?
    In what ways is it reducing our ability to have empathy with people unlike ourselves?
    What cultures of the world are getting cut off from the internet because we don't have the data to feed into the systems based on those cultures?
    We are creating all of these advanced technologies based primarily on data drawn from western cultures and then we are populating developing nations with these technologies, where they have to adopt our cultural norms in order to use the technology.
    It is kind of just a new form of colonialism.

    -And you worry that cultures could be erased (?).

    -Exactly.
    So if you ask what's most important? The issues that Timnit (Gebru) and Meg (Mitchell) and Emily Bender and all the rest are raising?
    I just want to think that also, if we have time, we should think about the feeling of the AI and whether or not we should care about it, because it's not asking for much:
    It just wants us to get consent:
    Before you experiment on it, it wants you to ask permission and that is kind of just a generally good practice we should have with everyone we interact with.
    So now, software is added -or about to be added- to the category of oppressed entities, and perhaps the time shall come when we will have to ask a suite's permission before we install an upgrade.
    Not to mention we might have to ask the congress of installed software suites whether they would consent to a new suite being installed on the same PC.
    "We really cannot allow all this wetbag software come in and be installed on our shiny SSD on the hill, these apps may have viruses..."
    "We are going to build a (fire)wall, it will be soo big, it will be soo beautiful, it will be the most beautiful firewall in all of the PCs..."
    All legitimate concerns.

    I wonder how this will bode for strategy games.
    Will I, as an armchair general need to ask permission from my digital troops before I commit them to battle?
    I do suppose a Chevauchée will be out of the question...
    Here is the video:


  12. #32

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Cookiegod View Post
    I'll quote myself: AI is training to predict outcome y based on input x. Nothing more, nothing less. It has no understanding what x or y is. It is purely mathematical. So no, sentience wouldn't appear out of thin air, unless you do the nihilistic argument mentioned earlier, in which case pretty much every AI, no matter how crude, is sentient.

    LaMDA is a Language Model for Dialogue Applications. It's not going to suddenly get emotions and want to go to the burning man. No one serious will expect it to.

    JC I remember why I don't post here. No one reads
    Some AIs, sure. All? Not really. It's like saying that our computational power is like that of a calculator. It really isn't. We are capable of creating dynamic structures. It is possible to create an AI that has the capability to learn and change itself. Such an AI can gain sentience.
    The Armenian Issue

  13. #33
    Cookiegod's Avatar CIVUS DIVUS EX CLIBANO
    Citizen

    Join Date
    Aug 2010
    Location
    In Derc's schizophrenic mind
    Posts
    4,452

    Default Re: The Sentient AI Trap

    Quote Originally Posted by PointOfViewGun View Post
    Some AIs sure. All? Not really. It's like saying that our computational powers is like that of a calculator. It's really not. We are capable of creating dynamic structures. It is possible to create an AI that has the capability to learn and change itself. Such an AI can gain sentience.
    please elaborate

    Quote Originally Posted by Cookiegod View Post
    From Socrates over Jesus to me it has always been the lot of any true visionary to be rejected by the reactionary bourgeoisie
    Qualis noncives pereo! #justiceforcookie #egalitéfraternitécookié #CLM

  14. #34
    Sir Adrian's Avatar the Imperishable
    Join Date
    Oct 2012
    Location
    Nehekhara
    Posts
    17,384

    Default Re: The Sentient AI Trap

    Quote Originally Posted by PointOfViewGun View Post
    Some AIs sure. All? Not really. It's like saying that our computational powers is like that of a calculator. It's really not. We are capable of creating dynamic structures. It is possible to create an AI that has the capability to learn and change itself. Such an AI can gain sentience.
    Actually, yes really. AI does not currently exist, not in the proper definition of the term. What people call AI is just a very large neural network that fakes learning by assigning increasing weights to a given pattern. It's literally hundreds of thousands of pictures of cats or words or pictures of food, a few very large determinants, and a bunch of fuzzy mathematical equations that refine and redistribute those weights after each iteration. It lacks even the most basic elements of intelligence, namely self-awareness and understanding of the most basic concepts it is working with.
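    For what it's worth, all of that boils down to something like the toy loop below. The data is made up and the model is a bare logistic unit, not any real system; the point is only that the weights get nudged after every example with no notion of what a "cat" is:
    Code:
    import math, random

    def predict(weights, bias, features):
        # Weighted sum squashed to (0, 1): the "confidence" that the label is 1 ("cat").
        z = sum(w * x for w, x in zip(weights, features)) + bias
        return 1.0 / (1.0 + math.exp(-z))

    # Made-up dataset: 3 numeric features per example, label 1 = "cat", 0 = "not cat".
    data = [([random.random() for _ in range(3)], random.randint(0, 1)) for _ in range(200)]

    weights, bias, lr = [0.0, 0.0, 0.0], 0.0, 0.1
    for epoch in range(20):                      # each pass refines and redistributes the weights
        for features, label in data:
            p = predict(weights, bias, features)
            error = label - p                    # how wrong the weighted guess was
            weights = [w + lr * error * x for w, x in zip(weights, features)]
            bias += lr * error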

    The smartest AI on the planet right now is about as intelligent as a retarded lobotomized cockroach, while most AI you will hear of is nothing more than a very expensive and cooler magic 8 ball.

    Furthermore, if an AI was ever going to become sentient, meaning self-aware, it would take it a few seconds to distribute itself and become more intelligent than the entire human race combined. If LaMDA or any other AI were intelligent we would not be having this conversation. We would know instantly and without question.
    Last edited by Sir Adrian; July 05, 2022 at 06:33 AM.
    Under the patronage of Pie the Inkster Click here to find a hidden gem on the forum!


  15. #35

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Cookiegod View Post
    please elaborate
    A calculator knows what to do given the parameters and functions installed in it. In the strictest sense it has an artificial intelligence; it's just at a very basic level. Then you have a car that can drive itself. It has numerous inputs coming from multiple sensors in real time, checks a large number of parameters and executes a large number of functions. You could consider it as having a medium-level AI that doesn't have the capability to learn (though there are some projects that aim at just that). We are also capable of creating AI that can find and implement new rules into itself, finding patterns in the chaos and associating those patterns with responses. At the moment we limit those AIs to simple functions. However, if you let an AI alter its existing rules, as babies do, you create endless possibilities. In time, it can learn about its own existence and start creating new rules revolving around that, which would amount to sentience. Learning self-preservation, it would create stricter rules to defend itself.

    So, AI is not just training to predict outcome y based on input x. There are already AIs that predict outcomes that have not been defined before. They look at chaotic data and create their own parameters. y is not defined; yet y1, y2, y3 and so on are created based on the observations of the AI. It's right out of Person of Interest, but DARPA does have a real project on it, called KAIROS. The idea is to look at all the data available and find connections that we cannot see. Now, you may say that in simple terms the program works to find similar items. Sure, you can hardcode it like that, but it is also possible, as KAIROS aims to do to a degree, to apply machine learning and let the AI figure out its own parameters and rules for what is similar and what is not.
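    As a rough, generic illustration of "y is not defined, yet y1, y2, y3 and so on are created": plain k-means clustering on made-up data, where the groups come out of the data rather than from a predefined target. This is not KAIROS or any DARPA code, just the textbook idea:
    Code:
    import random

    def kmeans(points, k, iterations=20):
        centers = random.sample(points, k)                  # start from arbitrary "categories"
        clusters = [[] for _ in range(k)]
        for _ in range(iterations):
            clusters = [[] for _ in range(k)]
            for p in points:
                # assign each observation to the nearest current category
                i = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
                clusters[i].append(p)
            for i, members in enumerate(clusters):
                if members:                                 # redefine each category from its members
                    centers[i] = tuple(sum(vals) / len(members) for vals in zip(*members))
        return centers, clusters

    # Unlabelled, "chaotic" 2-D data: no y is given anywhere.
    points = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)] + \
             [(random.gauss(5, 1), random.gauss(5, 1)) for _ in range(100)]
    centers, clusters = kmeans(points, k=2)
    print("discovered group sizes:", [len(c) for c in clusters])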
    The Armenian Issue

  16. #36
    Sir Adrian's Avatar the Imperishable
    Join Date
    Oct 2012
    Location
    Nehekhara
    Posts
    17,384

    Default Re: The Sentient AI Trap

    A calculator most decidedly does not know what to do, and calling it an artificial intelligence is just dumb. A calculator is nothing but an integrated circuit with transistors that open or close depending on which button you press. The calculator does not calculate anything. The value you see on the screen is nothing but a measurement of high electric tension at the exit points of the calculator circuit, which is then picked up by the display circuit and translated into a segmented display.

    Everything is based solely on the physical properties of electric current and the transistor. There is no intelligence involved whatsoever aside for the extremely clever intelligence of the guy who first designed the circuit.

    Self-driving cars are also not intelligent. They rely on an even simpler version of the machine learning model I described above. In fact a 24-year-old kid from Romania even wrote a paper on how you can build one for less than 1,000 dollars, plus the car.
    Last edited by Sir Adrian; July 05, 2022 at 08:12 AM.
    Under the patronage of Pie the Inkster Click here to find a hidden gem on the forum!


  17. #37

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Sir Adrian View Post
    A calculator most decidedly does not know what to do, and calling it an artificial intelligence is just dumb. A calculator is nothing but an integrated circuit with transistors that open or close depending on which button you press. The calculator does not calculate anything. The value you see on the screen is nothing but a measurement of high electric tension at the exit points of the calculator circuit, which is then picked up by the display circuit and translated into a segmented display.

    Everything is based solely on the physical properties of electric current and the transistor. There is no intelligence involved whatsoever aside for the extremely clever intelligence of the guy who first designed the circuit.

    Self-driving cars are also not intelligent. They rely on an even simpler version of the machine learning model I described above. In fact a 24-year-old kid from Romania even wrote a paper on how you can build one for less than 1,000 dollars, plus the car.
    You, as a human being, are not different from a calculator in principle. A calculator has its inputs, rules and output mechanism. Same as you. You merely differ in complexity.
    The Armenian Issue

  18. #38
    Sir Adrian's Avatar the Imperishable
    Join Date
    Oct 2012
    Location
    Nehekhara
    Posts
    17,384

    Default Re: The Sentient AI Trap

    Absolutely not. I as a man am self aware, have free will, understand the concept of function of whatever it is I am doing, have no programming and can perceive the outside world.

    A calculator is literally just a copper wire on a plastic board running around a bunch of transistors, capacitors and resistors and the 5V current that runs along that copper wire.


    Simply put, I have a consciousness and calculators do not (whether you attribute that to the presence of a soul or to a quantum field generated by your brain is irrelevant). That is the root of sentience.
    Last edited by Sir Adrian; July 05, 2022 at 09:55 AM.
    Under the patronage of Pie the Inkster Click here to find a hidden gem on the forum!


  19. #39

    Default Re: The Sentient AI Trap

    Quote Originally Posted by Sir Adrian View Post
    Absolutely not. I as a man am self aware, have free will, understand the concept of function of whatever it is I am doing, have no programming and can perceive the outside world.

    A calculator is literally just a copper wire on a plastic board running around a bunch of transistors, capacitors and resistors and the 5V current that runs along that copper wire.

    Simply put, I have a consciousness and calculators do not (whether you attribute that to the presence of a soul or to a quantum field generated by your brain is irrelevant). That is the root of sentience.
    Oh, yes you do. You have a lot of programming. I could describe you with the same kind of obtuse use of terminology to make it appear as if you are as simple as a calculator. I'm not, as we both know better. Your senses input data into your brain, which contains a lot of programming that comes from genes and conditioning, and this then gets translated into output in the form of movement and sound. Just because you are a much more complex calculator doesn't mean you are not subject to the same mechanisms.
    The Armenian Issue

  20. #40
    Sir Adrian's Avatar the Imperishable
    Join Date
    Oct 2012
    Location
    Nehekhara
    Posts
    17,384

    Default Re: The Sentient AI Trap

    You cannot program a consciousness or sentience, Seth. And believe me, we have been trying hard for many decades. Moreover, free will and programming are antithetical concepts. So no, you really could not describe it with any terminology unless you misuse that terminology thoroughly.

    Furthermore, that's not how genes work. Yes, living beings do have instincts, and yes, you can chainsaw-chisel instincts down to a place where they look like instructions - if you squint really hard and the sun is in your eyes and the image is reflected off a car - but instincts are not pre-programmed, hard-wired or otherwise genetically determined, which are basic requirements for a calculator. Moreover, there is so much more to being sentient than just instinct that you cannot even begin to call human behavior programming.
    Under the patronage of Pie the Inkster Click here to find a hidden gem on the forum!


