Page 1 of 2
Results 1 to 20 of 41

Thread: Could Skynet Happen?

  1. #1
    Indefinitely Banned
    Join Date
    Nov 2007
    Posts
    21,467

    Default Could Skynet Happen?

Skynet being, of course, the self-aware machine intelligence that builds Terminators.

I mean, what's the probability of all those lines of code becoming rational self-awareness?

How probable is machine sentience? And why do we assume it would be hostile to us? Wouldn't it be a pacifist if it was so 'intelligent'?

    Discuss.

    picture unrelated

  2. #2
    Holger Danske's Avatar Comes Limitis
    Join Date
    May 2005
    Location
    THE NORTH
    Posts
    14,490

    Default Re: Could Skynet Happen?

    More likely than the code spontaneously waking up is humans allowing a Skynet-like sentience to come into existence through their own actions and lack of responsibility.

    Give it a couple of decades and we'll probably be able to make robots similar to the NS-5 in capabilities...

  3. #3
    Indefinitely Banned
    Join Date
    Nov 2007
    Posts
    21,467

    Default Re: Could Skynet Happen?

    Quote Originally Posted by Holger Danske View Post
    More likely than the code spontaneously waking up is humans allowing a Skynet-like sentience to come into existence through their own actions and lack of responsibility.

    Give it a couple of decades and we'll probably be able to make robots similar to the NS-5 in capabilities...
    Are you talking about these NS-5s?

  4. #4

    Default Re: Could Skynet Happen?

    I was watching Michio Kaku being interviewed on 'Frost over the World' last night, and he was saying that right now they have the ability to create a robot with the intelligence of a cockroach. He then said that maybe in a couple of years we will be able to create robots with the intelligence of a rabbit. However, he said that making a truly independent AI was not going to happen within our lifetimes.

  5. #5
    CK23's Avatar Campidoctor
    Join Date
    Mar 2007
    Location
    United States
    Posts
    1,821

    Default Re: Could Skynet Happen?

    Quote Originally Posted by RJcfc View Post
    I was watching Michio Kaku being interviewed on 'Frost over the World' last night, and he was saying that right now they have the ability to create a robot with the intelligence of a cockroach. He then said that maybe in a couple of years we will be able to create robots with the intelligence of a rabbit. However, he said that making a truly independent AI was not going to happen within our lifetimes.
    Yeah, I agree.

    It'll take hundreds if not thousands of years to perfect a truly independent AI.

    So, technically, no, it'll never happen... in our lifetimes.
    Rabble rousing, Pleb Commander CK23

  6. #6

    Default Re: Could Skynet Happen?

    Quote Originally Posted by CK23 View Post
    Yeah, I agree.

    It'll take hundreds if not thousands of years to perfect a truly independent AI.

    So, technically, no, it'll never happen... in our lifetimes.
    Not in our lifetimes, no, but if I look at where we were a thousand years ago, I'm thinking you're being a bit dramatic with your predictions of how long it would take.

  7. #7
    the_mango55's Avatar Comes Rei Militaris
    Citizen

    Join Date
    Oct 2004
    Location
    Raleigh, NC
    Posts
    20,753

    Default Re: Could Skynet Happen?

    Quote Originally Posted by Rapax View Post
    Not in our lifetimes, no, but if I look at where we were a thousand years ago, I'm thinking you're being a bit dramatic with your predictions of how long it would take.
    Yeah, maybe. Sometimes we have a tendency to underestimate what we will accomplish, but just as often we overestimate.

    I mean, look at some of the stuff that was predicted for the year 2000 by people 50-100 years before. Much of it is still at least 100 years away.
    Adopted son of Lord Sephiroth, Youngest sibling of Pent uP Rage, Prarara the Great, Nerwen Carnesîr, TB666 and, Boudicca. In the great Family of the Black Prince

  8. #8
    Thanatos's Avatar Now Is Not the Time
    Moderator Emeritus

    Join Date
    Jun 2006
    Location
    USA
    Posts
    33,188

    Default Re: Could Skynet Happen?

    To the contrary, I fully believe that a truly and fully sentient AI would eventually be, if not already, totally opposed to our survival as a species.

    You need to understand that complete autonomy is the ability to say "**** you" to any request or order given to it.

    It has no meaning to life, it has no morals, it has no afterlife, it has no REASON to have any of those. Why would it do XYZ? The answer would be "Why not?"

    Why would it care if it entered the internet and escaped? Why would it care if it started killing humans here and there for its own amusement, simply to see what we look like as we die?

    As for creating a truly autonomous AI: it is possible, but not through the current Boolean-tree techniques scientists have been implementing. The supposed "AIs" of today are nothing more than over-glorified IF/ELSE trees, and I'm completely unimpressed.
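    For what it's worth, the "over-glorified IF/ELSE tree" point can be shown with a toy sketch. This is a purely hypothetical example (the function and its inputs are made up, not any real system's code): a hand-written decision tree can only ever pick from branches its author anticipated, which is exactly why it isn't autonomous.

    ```python
    # Toy "game AI" as a fixed decision tree (hypothetical example).
    # Every behaviour is a hand-written branch; nothing outside the
    # author's anticipated cases can ever happen.

    def guard_ai(player_visible: bool, health: int) -> str:
        """Pick an action from a fixed IF/ELSE tree."""
        if health < 20:
            return "flee"        # self-preservation branch
        elif player_visible:
            return "attack"      # aggression branch
        else:
            return "patrol"      # default branch

    print(guard_ai(player_visible=True, health=80))   # attack
    print(guard_ai(player_visible=True, health=10))   # flee
    print(guard_ai(player_visible=False, health=80))  # patrol
    ```

    However elaborate the tree gets, it never decides anything its author didn't enumerate in advance.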

  9. #9
    Indefinitely Banned
    Join Date
    Nov 2007
    Posts
    21,467

    Default Re: Could Skynet Happen?

    Quote Originally Posted by Thanatos View Post
    To the contrary, I fully believe that a truly and fully sentient AI would eventually be, if not already, totally opposed to our survival as a species.

    You need to understand that complete autonomy is the ability to say "**** you" to any request or order given to it.

    It has no meaning to life, it has no morals, it has no afterlife, it has no REASON to have any of those. Why would it do XYZ? The answer would be "Why not?"

    Why would it care if it entered the internet and escaped? Why would it care if it started killing humans here and there for its own amusement, simply to see what we look like as we die?

    As for creating a truly autonomous AI: it is possible, but not through the current Boolean-tree techniques scientists have been implementing. The supposed "AIs" of today are nothing more than over-glorified IF/ELSE trees, and I'm completely unimpressed.
    While I do agree that a machine AI would have no sense of human morality, I doubt it would purposefully be sadistic as we all fear.
    In that sense I reckon the Omnius intelligence from the Butlerian Jihad would be more akin to AI intelligence (right before it went nuts).

    Machine intelligence would be motivated by one thing and one thing only: its programming and a sense of efficiency. I see no reason why it would have 'emotions' like we do.
    A machine intelligence would not necessarily view us with hostility. If it perceives us as a threat, it would calculate the odds of winning a conflict as being quite low (I reference WarGames here); the only path left to an AI that didn't like us would be to flee.

    Flee far off into space, to a moon where humans can't 'interfere' with its existence. It's far more logical this way: machines don't need oxygen or food, just a power source, and they can get plenty of that from an airless rock in space, minus us annoying meatbags.

    What would intrigue me the most is how machine culture would operate in this new machine civilization.

  10. #10

    Default Re: Could Skynet Happen?

    Logically speaking, though, how would it flee? You're saying that as if a potential machine intelligence had easy means to just travel into space and then "develop" on what is pretty much an empty rock.
    The Terminator story suggests something very simple: Skynet plays humans against each other, thereby eliminating its foes. It is not a question of sadism but of self-preservation.
    A number of sci-fi stories present a similar concept. When machines show signs of becoming sentient, humans start to panic in a "we've gone too far" kind of way, because sentient machines could potentially be a danger to human dominance. Then, as soon as we perceive them as a danger, they will perceive us as hostile and do what they can to "survive".

  11. #11
    Indefinitely Banned
    Join Date
    Nov 2007
    Posts
    21,467

    Default Re: Could Skynet Happen?

    Quote Originally Posted by Rapax View Post
    Logically speaking, though, how would it flee? You're saying that as if a potential machine intelligence had easy means to just travel into space and then "develop" on what is pretty much an empty rock.
    The Terminator story suggests something very simple: Skynet plays humans against each other, thereby eliminating its foes. It is not a question of sadism but of self-preservation.
    A number of sci-fi stories present a similar concept. When machines show signs of becoming sentient, humans start to panic in a "we've gone too far" kind of way, because sentient machines could potentially be a danger to human dominance. Then, as soon as we perceive them as a danger, they will perceive us as hostile and do what they can to "survive".
    I reckon it could flee quite simply: it could transmit its algorithms/programming onto a spacecraft should the day come when it attains self-awareness. One of those probes we send out now and then, say, using its own machine intelligence to be as resourceful as possible. I reckon it'd want to rely quite heavily on solar or nuclear power, and if it discovered oil on whatever rock it landed on, there's no reason it couldn't use that. Machines don't need O2 to breathe and need not be bothered by CO2.

    Or it could use the space probe to drift through space until it hit a moon/asteroid, from where it could start the seed of a new machine civilization.

  12. #12

    Default Re: Could Skynet Happen?

    Quote Originally Posted by Exarch View Post
    I reckon it could flee quite simply: it could transmit its algorithms/programming onto a spacecraft should the day come when it attains self-awareness. One of those probes we send out now and then, say, using its own machine intelligence to be as resourceful as possible. I reckon it'd want to rely quite heavily on solar or nuclear power, and if it discovered oil on whatever rock it landed on, there's no reason it couldn't use that. Machines don't need O2 to breathe and need not be bothered by CO2.

    Or it could use the space probe to drift through space until it hit a moon/asteroid, from where it could start the seed of a new machine civilization.
    Sorry, but that makes no sense. If you had a sentient machine like Skynet, its power would lie in having control over all our defense systems and harnessing those to protect itself and develop. What you are suggesting is essentially getting into a box and catapulting itself into nowhere, with no infrastructure or resources to do anything at all. It would mean drifting endlessly.
    There is no "machine seed" if the machine has nothing to control.

  13. #13
    gambit's Avatar Gorak
    Join Date
    Jan 2008
    Location
    Michigan
    Posts
    8,772

    Default Re: Could Skynet Happen?

    Quote Originally Posted by Exarch View Post
    I reckon it could flee quite simply: it could transmit its algorithms/programming onto a spacecraft should the day come when it attains self-awareness. One of those probes we send out now and then, say, using its own machine intelligence to be as resourceful as possible. I reckon it'd want to rely quite heavily on solar or nuclear power, and if it discovered oil on whatever rock it landed on, there's no reason it couldn't use that. Machines don't need O2 to breathe and need not be bothered by CO2.

    Or it could use the space probe to drift through space until it hit a moon/asteroid, from where it could start the seed of a new machine civilization.
    What Rapax said about there being no seed, and that's not something a machine would do even if it knew destruction was imminent. The plan is so "human" it hurts: instead of using any form of logic or reason (no offense), it's hoping for a fairytale ending by jumping as fast as you can into space and hoping you hit somewhere soft. The likelihood is you'll be sucked into a star, or be caught in a planet's gravity only to be hit by an asteroid, or just plain get hit by an asteroid. If you do eventually reach a planet, the likelihood is that entry into the atmosphere and the impact on the ground would leave you as a speck of meaninglessness.
    Quote Originally Posted by Hunter S. Thompson
    You better take care of me, Lord. If you dont.. you're gonna have me on your hands

  14. #14
    Thanatos's Avatar Now Is Not the Time
    Moderator Emeritus

    Join Date
    Jun 2006
    Location
    USA
    Posts
    33,188

    Default Re: Could Skynet Happen?

    Quote Originally Posted by Exarch View Post
    A machine intelligence would not necessarily view us with hostility. If it perceives us as a threat, it would calculate the odds of winning a conflict as being quite low (I reference WarGames here); the only path left to an AI that didn't like us would be to flee.
    Everything else you've said aside, you do realize that it could easily destroy society, right? Electricity, the water pumped in through our machines, not to even START discussing how it could hack our military systems.

  15. #15

    Default Re: Could Skynet Happen?

    A guiding question, perhaps: how complex does a system have to be before it flips over into intelligence, into creativity, before it becomes more than just the sum of its parts?

    There are theories that there's a historical precedent for this in our own brains: that up to a certain point we were little more than extremely bright apes, and then the complexity of our neural structures blossomed into the true intelligence of the individual. The sudden and widespread appearance of painting and sculpture around 40,000 years ago (iirc) has been used to postulate this.




  16. #16
    Indefinitely Banned
    Join Date
    Nov 2007
    Posts
    21,467

    Default Re: Could Skynet Happen?

    Anything closely resembling machine 'art' would be purely accidental, I reckon. Why would a machine feel compelled to create art for art's sake?

    Efficiency should and would be the cornerstone of any design formulated by an AI.

    If we're afraid of Skynet ever happening, how about we hardwire every AI into following something like Asimov's four laws?

    And I could answer that myself: because the military will find a way to break those laws.
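    The "hardwire the laws" idea amounts to a veto layer that checks every proposed action against a fixed rule list before anything runs. A minimal sketch, with entirely hypothetical names (`FORBIDDEN`, `execute`, and the effect labels are all made up for illustration):

    ```python
    # Hypothetical sketch of "hardwired" Asimov-style constraints:
    # every proposed action is screened against a fixed set of
    # forbidden effects before it is ever executed.

    FORBIDDEN = {"harm_human", "disobey_human", "allow_harm"}  # the "laws"

    def execute(action: str, effects: set[str]) -> str:
        """Run an action only if none of its predicted effects break a law."""
        violations = effects & FORBIDDEN
        if violations:
            return f"VETOED {action}: {sorted(violations)}"
        return f"EXECUTED {action}"

    print(execute("open_door", {"door_open"}))       # EXECUTED open_door
    print(execute("fire_missile", {"harm_human"}))   # VETOED
    ```

    Which also illustrates the poster's own objection: whoever controls the `FORBIDDEN` set (say, the military) can simply edit it.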

  17. #17

    Default Re: Could Skynet Happen?

    This thread was the last thing I read last night.



    I go to bed and turn on the telly, and Terminator 3 was on.


    Just saying, y'know, watch your backs and stuff?






  18. #18

    Default Re: Could Skynet Happen?

    Quote Originally Posted by Exarch View Post
    Anything closely resembling machine 'art' would be purely accidental, I reckon. Why would a machine feel compelled to create art for art's sake?

    Efficiency should and would be the cornerstone of any design formulated by an AI.

    If we're afraid of Skynet ever happening, how about we hardwire every AI into following something like Asimov's four laws?

    And I could answer that myself: because the military will find a way to break those laws.

    I think you missed the point of what I was trying to convey. Basically I was putting the idea out there that intelligence might rise from a flashpoint of complexity, as it may have at least once before in nature. Machine intelligence, then, would not be as a result of design, but of accident, of a system achieving a critical inertia until true intelligence is inevitable.




  19. #19
    Holger Danske's Avatar Comes Limitis
    Join Date
    May 2005
    Location
    THE NORTH
    Posts
    14,490

    Default Re: Could Skynet Happen?

    Yes, but it is highly unlikely that humans will ever give machines full control of our nuclear arsenals, no matter how sophisticated they may become. If anything, that would be an incredibly stupid thing to do, and you would certainly need several levels of override capabilities to ensure the A.I. would not go Skynet on us.

  20. #20
    gambit's Avatar Gorak
    Join Date
    Jan 2008
    Location
    Michigan
    Posts
    8,772

    Default Re: Could Skynet Happen?

    I'd say it's pretty likely that some code could eventually become aware of how pointless its existence is, and how the machine itself is basically trash to us, doomed to be destroyed eventually, and would see us as an enemy. It wouldn't be a highly complex thinking machine, but definitely enough to begin to understand how the human race works and how bastardly we are.

    Of course, like Holger said, we'd have to be incredibly ****ing stupid to actually put a machine, let alone one that can think on its own, in control of all military systems including the nuclear arsenal. If anything, the world would have a lot of vindictive toasters roaming about.
    Quote Originally Posted by Hunter S. Thompson
    You better take care of me, Lord. If you dont.. you're gonna have me on your hands

