Page 4 of 4 FirstFirst 1234
Results 61 to 76 of 76

Thread: Will future Artificial Intelligence try to end Mankind?

  1. #61

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Quote Originally Posted by neep View Post
    Very interesting thread, on several levels.

    An AI wouldn't necessarily have any dedicated hardware.
    Likely the first AI (whatever that means) to exist would probably have escaped from a research lab where it existed as a virtual/software representation.

    As an autonomous entity in cyberspace it would have the same abilities as black hat hackers. It doesn't need to punch you in the face, it just needs to
    - update your IRS information to show that you owe $3 million past due
    - update your FBI file to show that you're wanted in fifteen states for threatening the president
    - update your medical records to show that you have ebola + smallpox + bad breath
    That destroys your life. Now apply that x10 million times, and you've destroyed the whole country by overloading the system with too many false positives.

    Point that hasn't been mentioned -
    Likely there would be at least several free roaming AIs, who may or may not be friendly towards humanity.
    Disagreements between those AIs (how best to kill humans, how best to protect and help humans) would play out in cyberspace with disastrous effects on our infrastructure.
    Most networks would be effectively destroyed and anything plugged in to the webs would stop working - water, gas, electricity supplies; food supplies would not get routed to where they need to be; shipping and planes would be effectively grounded.
    We could end up being thrown back to the stone age as collateral damage from an AI war.
    You're living in a book. Get back in the science forum.
    One thing is for certain: the more profoundly baffled you have been in your life, the more open your mind becomes to new ideas.
    -Neil deGrasse Tyson

    Let's think the unthinkable, let's do the undoable. Let us prepare to grapple with the ineffable itself, and see if we may not eff it after all.

  2. #62
    neep's Avatar Tiro
    Join Date
    Jan 2014
    Location
    Network 23
    Posts
    213

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Quote Originally Posted by Gaidin View Post
    You're living in a book. Get back in the science forum.
    Really? That's the best you've got?

    You've never been profoundly baffled?

    Well, all the best to you.

  3. #63

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Quote Originally Posted by Gaidin View Post
    It's that way because 99% of AI concepts that writers like to ground their stories in to create their conflicts are grounded in software and hardware rules. Of course, they like to pretend those rules don't exist in order to create that conflict, since they're more interested in the characters than in realism, but that's neither here nor there. It's something you can just moan about when, yet again, I tell you, you can harden a system with rules it can't break if you want to, if you bother to learn how it works well enough. Other, different systems outside the system can screw with it, but the system itself and other duplicate systems, not so much.
    Except you are casually dismissing the further-future possibility that artificial intelligence is not based on "hardware and software" distinctions but on neuroplasticity, just like all natural animal nervous systems.

    Natural animal nervous systems do not possess the hardware v. software distinctions you insist upon. An artificial animal nervous system that reached self-aware consciousness does not necessarily have to be based upon classic Asimov robot distinctions.

    Quote Originally Posted by Gaidin View Post
    I'm not even getting into the complexity of what rules can and can't be put into hardware. I'm just saying I can limit the robot's behavior to the point that certain rules can not be messed with. Period, full stop.
    You can limit a classic Asimov robot, yes. You cannot limit bio-organic AI like Banks' Culture starships, the AI in Hamilton's Reality Dysfunction, or the entities in Quantum Thief in those same ways.

    Heck, even under your "hardware" limitations, a Matrioshka Brain would not necessarily be bound in those ways.
    Last edited by chilon; January 07, 2015 at 01:25 AM.
    "Our opponent is an alien starship packed with atomic bombs," I said. "We have a protractor."

    Under Patronage of: Captain Blackadder

  4. #64

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Quote Originally Posted by neep View Post
    Really? That's the best you've got?

    You've never been profoundly baffled?

    Well, all the best to you.
    Read the rest of my posts. In them I've already answered yours, since your BS is already in a system. And you know what, Chilon, if you're going to drone on about fiction without backing it up in this place, I'm not really interested. Back up how this actually has been built. Source it.
    One thing is for certain: the more profoundly baffled you have been in your life, the more open your mind becomes to new ideas.
    -Neil deGrasse Tyson

    Let's think the unthinkable, let's do the undoable. Let us prepare to grapple with the ineffable itself, and see if we may not eff it after all.

  5. #65

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Quote Originally Posted by xcorps View Post
    10 If a=human then 40
    20 If a = wall then 30
    30 punch =1 goto 10
    40 punch = 0 goto 10

    That was hard.
    If you want to hardwire it, yes. You again fold complex concepts into simple logic. That says little about the complexity of the problem, because the difference between a punch and a nudge, and between a wall and a human, is what needs to be conceptualized and implemented in software first.
    "Sebaceans once had a god called Djancaz-Bru. Six worlds prayed to her. They built her temples, conquered planets. And yet one day she rose up and destroyed all six worlds. And when the last warrior was dying, he said, 'We gave you everything, why did you destroy us?' And she looked down upon him and she whispered, 'Because I can.' "
    Mangalore Design

  6. #66
    xcorps's Avatar Praefectus
    Join Date
    Jan 2010
    Location
    Missouri, US
    Posts
    6,916

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Quote Originally Posted by Mangalore View Post
    If you want to hardwire it, yes. You again fold complex concepts into simple logic. That says little about the complexity of the problem because the difference between a punch and nudge and a wall and a human is what needs conceptualizing and implementation into a software first.
    1) Hardwire? You mean like a ROM gateway?

    2)
    Punch {rotate actuator 1 27 degrees. rotate actuator 2 34 degrees. extend ram A 23 inches at 24000 psi}
    Nudge {rotate actuator 1 27 degrees. rotate actuator 2 34 degrees. extend ram A 23 inches at 2 psi}

    3) If we cannot separate the sensor signature of a wall from the sensor signature of a human being, we don't really have to worry about a bunch of human hating robots, do we?


    All of which is just pedantic back and forth what if's.

    The scenario suggested requires that we A) Create an artificial intelligence that is capable of becoming self aware AND B) that the self awareness becomes self determination AND that we somehow do this without being able to program that HUMANS=GOOD GUYS. DO NOT HARM.

    I get the romantic personification with the concept of people making fake people out of circuit boards and mechanisms. That's been going on forever. What I don't get is the insistence that dystopia is not just possible but the inevitable result.
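    As an editorial aside: the hard actuator ceiling xcorps gestures at with the "2 psi" nudge can be sketched in a few lines. This is a purely hypothetical illustration, not any real robot API; the constant name, the function, and the psi numbers are all invented here. The point is that a clamp enforced below the planning layer (like a value burned into ROM) caps force regardless of what is requested.

    ```python
    # Hypothetical sketch: a fixed force ceiling enforced below the planning
    # layer, analogous to a limit burned into ROM. All names and numbers
    # here are made up for illustration.

    MAX_FORCE_PSI = 2.0  # nudge-level ceiling; not modifiable by the planner

    def command_ram(requested_psi):
        """Clamp any requested actuator force to the hard ceiling."""
        return min(requested_psi, MAX_FORCE_PSI)

    print(command_ram(24000))  # a 'punch'-level request comes out as 2.0
    print(command_ram(1.5))    # requests under the ceiling pass through: 1.5
    ```

    Whether such a clamp survives contact with a self-modifying system is exactly what the rest of the thread argues about; the sketch only shows what "hardwired" means at its simplest.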
    Last edited by xcorps; January 07, 2015 at 04:45 PM.
    "Every idea is an incitement. It offers itself for belief and if believed it is acted on unless some other belief outweighs it or some failure of energy stifles the movement at its birth. The only difference between the expression of an opinion and an incitement in the narrower sense is the speaker's enthusiasm for the result. Eloquence may set fire to reason." -Oliver Wendell Holmes Jr.

  7. #67

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Quote Originally Posted by xcorps View Post
    1) Hardwire? You mean like a ROM gateway?

    ...
    That was the entire point, yes.

    You make some simplistic statements about how to go about it while ignoring the intricacies you would have to allow for something to be considered intelligent. The complexity of situational awareness is not as simple as you claim (there is a ton of leeway between 2 and 24,000 psi, and a push in front of a train is also not acceptable even if only 20 psi is necessary).
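    Mangalore's train example can be made concrete with a toy check (everything here is invented for illustration: the threshold, the function names, the `near_hazard` flag). A pure force threshold passes exactly the dangerous case; the missing piece is the context, which is the part that has to be conceptualized in software.

    ```python
    # Toy illustration of the point above: a pure force threshold misses
    # context. All names and numbers are hypothetical.

    SAFE_PSI = 25.0  # arbitrary 'gentle contact' ceiling

    def force_only_safe(psi):
        """Naive rule: any low-force contact is fine."""
        return psi <= SAFE_PSI

    def context_aware_safe(psi, near_hazard):
        """Even a gentle push is unsafe next to, say, a platform edge."""
        return psi <= SAFE_PSI and not near_hazard

    print(force_only_safe(20.0))                        # True: threshold says fine
    print(context_aware_safe(20.0, near_hazard=True))   # False: context says no
    ```

    The real difficulty, of course, is that `near_hazard` stands in for open-ended world modelling, which is precisely what cannot be reduced to a fixed lookup.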


    The scenario suggested requires that we A) Create an artificial intelligence that is capable of becoming self aware AND B) that the self awareness becomes self determination AND that we somehow do this without being able to program that HUMANS=GOOD GUYS. DO NOT HARM.
    You can program it, but you cannot enforce it if we are speaking about a complex self-organizing system, which is what you need for intelligence. Why should an intelligence accept inconsistent, illogical statements like the ones you put forth as mandatory commandments? An inability to question them would be proof of the opposite, i.e. that the thing is not intelligent. If it is an intelligence, a hallmark would be the capacity to evaluate such statements and modify them.

    Hence Gaidin's suggestion of hardwiring it, i.e. making it impossible to modify, which is where my counter-argument comes from: these are higher-level abstractions and conceptualizations which you cannot hardwire, since they won't emerge from a predictable state. They emerge from a software state, and software is difficult to make safe and, for a self-learning system, is supposed to be changeable.
    "Sebaceans once had a god called Djancaz-Bru. Six worlds prayed to her. They built her temples, conquered planets. And yet one day she rose up and destroyed all six worlds. And when the last warrior was dying, he said, 'We gave you everything, why did you destroy us?' And she looked down upon him and she whispered, 'Because I can.' "
    Mangalore Design

  8. #68

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Quote Originally Posted by Mangalore View Post
    Hence Gaidin's suggestion of hardwiring it, i.e. making it impossible to modify, which is where my counter-argument comes from: these are higher-level abstractions and conceptualizations which you cannot hardwire, since they won't emerge from a predictable state. They emerge from a software state, and software is difficult to make safe and, for a self-learning system, is supposed to be changeable.
    And yet you haven't answered my post. I'll go ahead and take that as you just ignoring me, because you either don't have an answer or you finally understand what I was saying and want to say it in such a fashion that you sound smarter.
    One thing is for certain: the more profoundly baffled you have been in your life, the more open your mind becomes to new ideas.
    -Neil deGrasse Tyson

    Let's think the unthinkable, let's do the undoable. Let us prepare to grapple with the ineffable itself, and see if we may not eff it after all.

  9. #69
    neep's Avatar Tiro
    Join Date
    Jan 2014
    Location
    Network 23
    Posts
    213

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Hi Chilon,
    thanks for the reminder on Iain Banks, I haven't read his stuff in years.
    His concepts seemed very well thought out - I'll need to go back and reread what I have, and get some of his latest works.

    Thanks Again.

  10. #70

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Quote Originally Posted by Gaidin View Post
    Read the rest of my posts. In them I've already answered yours since your BS is already in a system. And you know what Chilon, if you're going to drone on about fiction without backing it up in this place I'm not really interested. Back up how this actually has been built. Source it.
    Ok here is the issue then:

    You are basing your beliefs and opinions ONLY on the level of technology right now or in the near future.

    This thread is not limited to speculation about the near future, so it's a bit silly for you to insist on falsely limiting the discussion to only what is possible right now. We are only at a larval stage of development when it comes to evolving alongside technology, and a lot of what is not currently "possible" is going to become possible in the next 100-200 years.

    This thread speculates on "Artificial Intelligence trying to end Mankind". There is no limit on the theoretical speculation of the kind you are trying to impose. Since the fiction authors also have PhDs in things like astrophysics, theoretical math, and superstring theory, their speculation on what might potentially evolve, based on what we know now, is entirely relevant to the discussion.

    Current research in AI, at least at some universities, is already going down this path. The "neural net" AI researchers (as opposed to the strong computational AI researchers of the much more famous Fodor and Minsky crowd) are already working towards AI systems that use neural networks and Hebbian learning as a model, as opposed to traditional hardware-software distinctions. This research is still in its infancy, and we obviously haven't reached a level of bio-tech integration yet. Still, the neural network approach is currently more effective than Fodor/Minsky-style approaches.

    You can look at how well recurrent neural networks perform in pattern recognition tests compared to traditional systems. The key, again, is neuroplasticity and Hebbian learning (neurons that fire together, wire together). It is not outrageous to suggest that in the next 100-200 years we will be able to integrate bio-tech concepts into neural net AI concepts, and that is the direction many very smart people in neuroscience, AI, and bio-tech are working towards.
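    For readers unfamiliar with the rule chilon cites: "fire together, wire together" is, in its simplest textbook form, a weight update proportional to the product of pre- and post-synaptic activity (delta_w = lr * x * y). A minimal toy sketch, with invented numbers and no claim to match any particular research system:

    ```python
    # Minimal Hebbian learning sketch: the weight grows when pre- and
    # post-synaptic activity coincide. Toy numbers, purely illustrative.

    def hebbian_step(w, x, y, lr=0.1):
        """One Hebbian update: delta_w = lr * x * y."""
        return w + lr * x * y

    w = 0.0
    for _ in range(5):       # repeated co-activation strengthens the weight
        w = hebbian_step(w, x=1.0, y=1.0)
    print(round(w, 2))       # 0.5

    print(hebbian_step(0.0, x=1.0, y=0.0))  # no post activity: stays 0.0
    ```

    The relevance to the thread is that the "rule" lives in the weights, and the weights move whenever the system learns, which is the crux of the hardwiring argument.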

    The question itself already presupposes an artificial intelligence. That already puts this question well beyond our current level of knowledge. The concept of an artificial intelligence modifying itself, analogous to the way neuroplasticity and Hebbian learning work, is hardly crazy speculation when we already have to go far enough into the future that AI actually exists.
    Last edited by chilon; January 08, 2015 at 01:10 PM.
    "Our opponent is an alien starship packed with atomic bombs," I said. "We have a protractor."

    Under Patronage of: Captain Blackadder

  11. #71
    neep's Avatar Tiro
    Join Date
    Jan 2014
    Location
    Network 23
    Posts
    213

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Quote Originally Posted by xcorps View Post

    The scenario suggested requires that we A) Create an artificial intelligence that is capable of becoming self aware AND B) that the self awareness becomes self determination AND that we somehow do this without being able to program that HUMANS=GOOD GUYS. DO NOT HARM.

    I get the romantic personification with the concept of people making fake people out of circuit boards and mechanisms. That's been going on forever. What I don't get is the insistence that dystopia is not just possible but the inevitable result.
    That's a very useful way of describing the steps involved, and it provided me with a glimmer of positive news.
    It would seem feasible (in purely hypothetical terms since we obviously aren't even close to doing this) to build an AI but to be selective about the behavioral components to put in place.
    Therefore, we could create an AI that isn't concerned about self-preservation, doesn't see humans as a threat, and so avoids the Doomsday scenario.
    Such a Docile AI would be perfectly fine about being shut off or voluntarily powering down at the end of each day.
    And it would certainly never do anything to upset anybody.

    However, most of the AI thinkers that I've read seem to assume that AIs will advance to the point where they circumvent any limitations on behavior.
    I can't see any obvious flaw in their reasoning.
    Maybe I/we are insufficiently smart to think out the full consequences and tend to a simplistic, pessimistic view.
    If only we had some sort of advanced, smart-thinking machinery to help us answer this question...

    Perhaps the more interesting question is "If we believe that we can create AI, and it will self-advance to the point that it actually threatens humanity, why are we doing this ?"

  12. #72
    Elfdude's Avatar Tribunus
    Patrician Citizen

    Join Date
    Sep 2006
    Location
    Usa
    Posts
    7,335

    Default Re: Will future Artificial Intelligence try to end Mankind?

    I find that AI is potentially the most significant threat to humanity that may ever exist, especially if that AI has access to technology like quantum computing. However, currently, the entire internet could be implanted into a robot and I still wouldn't worry about it. It'll be another few decades IMO before we see something that reaches human potential.

  13. #73

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Quote Originally Posted by chilon View Post
    You can look at how well recurrent neural networks perform in pattern recognition tests compared to traditional systems. The key, again, is neuroplasticity and Hebbian learning (neurons that fire together, wire together). It is not outrageous to suggest that in the next 100-200 years we will be able to integrate bio-tech concepts into neural net AI concepts, and that is the direction many very smart people in neuroscience, AI, and bio-tech are working towards.

    The question itself already presupposes an artificial intelligence. That already puts this question well beyond our current level of knowledge. The concept of an artificial intelligence modifying itself, analogous to the way neuroplasticity and Hebbian learning work, is hardly crazy speculation when we already have to go far enough into the future that AI actually exists.
    And it's not exactly a hard idea to limit the abilities by the size of the bio-network, whatever abilities you want to give the bio-network. It won't be able to evolve beyond a certain point physically if it doesn't have a biological network physically capable of handling the ability you're scared out of your mind of. Again, even your damn bio-network, for the second time I'm noting, is limited by the hardware. You know, something we can do in an AI we are actually designing. The notable applicable example here would be the Neanderthals' brain size, or more importantly their encephalization relative to Homo sapiens, as related to intelligence. This would allow for some relative control over your paranoia from a hardware standpoint.
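    Gaidin's size argument can be illustrated with a toy capacity count. The layer sizes below are invented, and real biological or artificial networks are far more complicated; the sketch only shows that a fixed architecture fixes the number of connections, which learning can re-tune but never exceed.

    ```python
    # Toy illustration of a hardware capacity bound: a fixed layer layout
    # fixes the number of connections, no matter how the system later
    # learns. Layer sizes are invented for illustration.

    def connection_count(layer_sizes):
        """Fully-connected weights between consecutive layers (biases ignored)."""
        return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

    small = connection_count([100, 50, 10])     # 100*50 + 50*10 = 5500
    large = connection_count([1000, 500, 100])  # 1000*500 + 500*100 = 550000
    print(small, large)
    ```

    Whether a bound on raw connection count actually bounds the behaviors chilon worries about is, of course, the open question in the exchange.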
    Last edited by Gaidin; January 12, 2015 at 05:05 PM.
    One thing is for certain: the more profoundly baffled you have been in your life, the more open your mind becomes to new ideas.
    -Neil deGrasse Tyson

    Let's think the unthinkable, let's do the undoable. Let us prepare to grapple with the ineffable itself, and see if we may not eff it after all.

  14. #74

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Quote Originally Posted by Gaidin View Post
    And it's not exactly a hard idea to limit the abilities by the size of the bio-network, whatever abilities you want to give the bio-network. It won't be able to evolve beyond a certain point physically if it doesn't have a biological network physically capable of handling the ability you're scared out of your mind of. Again, even your damn bio-network, for the second time I'm noting, is limited by the hardware. You know, something we can do in an AI we are actually designing. The notable applicable example here would be the Neanderthals' brain size, or more importantly their encephalization relative to Homo sapiens, as related to intelligence. This would allow for some relative control over your paranoia from a hardware standpoint.
    What paranoia? I'm not paranoid about anything on this issue. No idea why you would even imply that.

    I simply entered the conversation when I thought you were trying to say that something like Asimov's Laws of Robotics could definitely be implemented on every possible AI configuration.

    You never actually answered that, though, and went off on a tangent, so it's been a bit hard to understand what your overall thesis would be.

    If you are arguing that all potential forms of AI could be limited in the Asimov's-Laws-of-Robotics style, then I disagree.

    If you are arguing that any potential form of AI would contain inherent limits based on its embodiment, then I agree and never disputed that factor. But that isn't the same as Asimov's Laws of Robotics. In fact, the type of embodiment affecting neural development is something I fully believe in and hinted at with the speculation on the sentient starship concepts grown in orbit around the sun.

    From your responses I still haven't been able to tell which one of those two you were trying to argue. I thought it was the former originally but now I think you are just stating the latter.
    "Our opponent is an alien starship packed with atomic bombs," I said. "We have a protractor."

    Under Patronage of: Captain Blackadder

  15. #75

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Quote Originally Posted by chilon View Post
    What paranoia? I'm not paranoid about anything on this issue. No idea why you would even imply that.

    I simply entered the conversation when I thought you were trying to say that something like Asimov's Laws of Robotics could definitely be implemented on every possible AI configuration.

    You never actually answered that, though, and went off on a tangent, so it's been a bit hard to understand what your overall thesis would be.

    If you are arguing that all potential forms of AI could be limited in the Asimov's-Laws-of-Robotics style, then I disagree.

    If you are arguing that any potential form of AI would contain inherent limits based on its embodiment, then I agree and never disputed that factor. But that isn't the same as Asimov's Laws of Robotics. In fact, the type of embodiment affecting neural development is something I fully believe in and hinted at with the speculation on the sentient starship concepts grown in orbit around the sun.

    From your responses I still haven't been able to tell which one of those two you were trying to argue. I thought it was the former originally but now I think you are just stating the latter.
    I stopped talking about Asimov's Bloody Laws of Robots when you tried to turn it to a field I couldn't actually talk details of. And even then I was never talking of that, because that doesn't make sense at hardware level for the most part. That's just Asimov being philosophical for his damn fiction. So get off your high horse.
    Last edited by Gaidin; January 13, 2015 at 04:00 AM.
    One thing is for certain: the more profoundly baffled you have been in your life, the more open your mind becomes to new ideas.
    -Neil deGrasse Tyson

    Let's think the unthinkable, let's do the undoable. Let us prepare to grapple with the ineffable itself, and see if we may not eff it after all.

  16. #76

    Default Re: Will future Artificial Intelligence try to end Mankind?

    Quote Originally Posted by Gaidin View Post
    I stopped talking about Asimov's Bloody Laws of Robots when you tried to turn it to a field I couldn't actually talk details of. And even then I was never talking of that, because that doesn't make sense at hardware level for the most part. That's just Asimov being philosophical for his damn fiction. So get off your high horse.
    High horse? That would be you mate.

    Essentially, you made the mistake of believing this thread is only about current-level technology when the intent was obviously more speculative, and you tried to shout down people who were bringing up more in-depth speculation.
    "Our opponent is an alien starship packed with atomic bombs," I said. "We have a protractor."

    Under Patronage of: Captain Blackadder

