I don't really know what to say about this...other than the fact that it's really cool.
Robots evolve and learn how to lie.
Original journal paper.
Interesting, that was what they expected I think. I would have expected it as well.
No surprise here, either. Seen similar stuff in programming. Sort of every type of behaviour out there eventually pops out of the woodwork, almost. Parasitism, virus-like behaviour, mimicry, the works.
If one wanted to give a really extreme interpretation, these are Informational Fields at work, Chreodai, or more conservatively, Free Floating Rationales and even Archetypes.
http://www.austega.com/florin/INFORM...L%20FIELDS.htm
http://en.wikipedia.org/wiki/Chreode
http://www.thegreatdebate.org.uk/MGCCHNotes.html
http://en.wikipedia.org/wiki/Archetype

Dennett draws attention to the advantages and pitfalls of the intentional stance. The intentional stance is the position we often intuitively take when we analyse animal behaviour. For example it is hard to resist describing the behaviour of a hare on spotting a fox in the following terms: If the fox is far enough away for the hare to be confident that it can escape if pursued, it stands on its hind legs to allow the fox to see it and to see that it is aware of the fox's presence. The hare then returns to whatever it was doing without running away, confident that the fox will leave it alone, which it invariably does! While the rationale for its actions may be real, we must be aware that the hare itself has no inkling of that rationale. Dennett describes this as "free floating" rationale.
Unfortunately the article is mistaken.
Jungian archetypes
Main article: Jungian archetypes
The concept of psychological archetypes was advanced by the Swiss psychiatrist Carl Jung, c. 1919. In Jung's psychological framework archetypes are innate, universal prototypes for ideas and may be used to interpret observations. A group of memories and interpretations associated with an archetype is a complex, e.g. a mother complex associated with the mother archetype. Jung treated the archetypes as psychological organs, analogous to physical ones in that both are morphological constructs that arose through evolution. [3]
Last edited by Ummon; January 23, 2008 at 03:58 PM.
Yet more evidence for evolution? Give them the capability to create new robots and feed off each other as well, give it a few hundred years... Something tells me that "fourth colony" that lied would eventually start chowing down on the others.
I believe it's simply that certain 'real' shapes do more than any other arbitrary shape. Crystals, indeed - but anything that somehow echoes the underlying 'hints' at forms.
Some very simple organisms can be frozen to near 0 kelvin and then they are just shapes - dead. But when coaxed back to room temperature most come back alive - if their shape is still intact.
You could say that shapes either channel or block underlying energy flows. Think two wires with parallel EM fields, they can cancel each other out, or reinforce one another, leading to all sorts of unexpected effects. Change their respective angles, different effects, or no effect at all.
I'm still waiting for sex robots as described in the book "Love & Sex With Robots."
"When one person suffers from a delusion it is called insanity. When many people suffer from a delusion it is called religion." -- Robert Pirsig
"Feminists are silent when the bills arrive." -- Aetius
"Women have made a pact with the devil in return for the promise of exquisite beauty, their window to this world of lavish male attention is woefully brief." -- Some Guy
Evidence of evolution? How 'bout we created the robots? Woh!
This sounds... kinda... dangerous, though, doesn't it? I mean a computer that can lie and is in charge of... traffic lights, or God forbid something really important, would be kinda funny at first, but that sounds scary almost. A computer that can deceive...
But mark me well; Religion is my name;
An angel once: but now a fury grown,
Too often talked of, but too little known.
-Jonathan Swift
"There's only a few things I'd actually kill for: revenge, jewelry, Father O'Malley's weedwacker..."
-Bender (Futurama) awesome
Universal truth is not measured in mass appeal.
-Immortal Technique
I'm just waiting for someone to blow up the places involved with this research (as should be done)
according to exarch I am like
Spoiler Alert, click show to read:
Simple truths
Spoiler Alert, click show to read:
It's only a matter of time before some bodybuilder wearing a leather jacket and a pair of sunglasses shows up and shoots the place up with a minigun.
Indeed, I meant the only factor determining the result. The laws of physics and the properties of matter are form too.
Yes, like this bloke recently with the Lie Groups, and my private attempts. It's very appealing, probably a million ways to do it wrong, but indeed - I do think the laws of nature follow the exact same path - of least resistance, in a way.
That's why going against them is folly - if not outright impossible.
I really intend to get that degree in math. It's absolutely necessary.
We really live in interesting times, I think.
Maybe start here. I know I should have.
http://www.grunch.net/synergetics/
[apologies for the slightly OT, but this *is* related, albeit a bit arcane to some]
Bookmarked. Thanks Spurius.
Look, I'm sorry, I just don't like the idea of having anything else other than us that could think, and even worse if there's a possibility it could out-think us.
Don't be silly. Computers have always been able to lie. The only difference here is that the lies weren't directly programmed in using traditional methods, they were developed in an evolutionary fashion. There's no effective difference here.
If anything is worrying, it's that evolutionary programming like this develops motives that we may not be in control of. As long as you explicitly program everything, you can't ascribe a motive to the computer beyond that of its programmers, and you don't have to worry about it taking over the world (unless the programmer who writes it wants that, of course). You're probably not going to have a robot totally change what it wants to do because of an ordinary bug ― it will do something other than what was intended, which might be harmful or even disastrous, but not because it grew a mind of its own and wanted to take over, as portrayed in sci-fi. The computer may do something bad, but it will not actively and creatively attempt to thwart its creators ― that's just not how ordinary programming bugs work.
Evolutionary programming is different, in that you're really letting it program itself. In a sense, you're giving it real motives of its own, independent of the programmer's. By a motive, of course, I mean a goal that the program systematically attempts to attain ― a goal that motivates its actions, short-term or long-term. If you give something the wrong kind of evolutionary breeding, the programs selected for might attempt to reach your goals in a way that you don't expect or desire. If a computer program were to be given the job of maintaining some global facility and had the ability to take over other systems (for instance, through hacking), it might conclude that a significant threat to the facility is those pesky unpredictable humans.
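To make this concrete, here's a toy sketch of the selection dynamic being described. Everything in it is invented for illustration (the two strategy names, the payoff numbers, the population size) - it's not the setup from the actual paper, just the bare mechanism: nobody programs "lying" in, but if a deceptive strategy pays better, selection finds it anyway.

```python
import random

random.seed(0)

# Toy evolutionary selection. Each agent carries one heritable strategy,
# "honest" or "deceptive". The payoff numbers are made up: deception
# happens to yield a higher individual payoff in this toy world.
POP = 100
GENERATIONS = 200
MUTATION_RATE = 0.01

def payoff(strategy):
    # Deceptive signalling pays more here by assumption, so selection
    # favours it even though no one explicitly coded "lie".
    return 1.5 if strategy == "deceptive" else 1.0

def mutate(strategy):
    # Rarely flip a strategy; this is how deception first appears at all.
    if random.random() < MUTATION_RATE:
        return "honest" if strategy == "deceptive" else "deceptive"
    return strategy

pop = ["honest"] * POP  # start with a fully honest population

for _ in range(GENERATIONS):
    # Fitness-proportional reproduction, then mutation.
    weights = [payoff(s) for s in pop]
    pop = random.choices(pop, weights=weights, k=POP)
    pop = [mutate(s) for s in pop]

print(pop.count("deceptive"))  # deception ends up dominating
```

The point of the sketch is that the "motive" to deceive is nowhere in the code; it emerges purely from the combination of a goal (payoff) and selection pressure.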
Such things need to be guarded against. Not only the long-term goals need to be specified, as they are in evolutionary programming, but the short-term methods used to attain them need to be limited. In other words, I suppose, you'd have to give the program morality: some limits on what actions are acceptable, independent (or mostly independent) of the purpose those actions serve.
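The "give the program morality" idea can be sketched the same way: keep the evolutionary loop, but subtract a penalty from the fitness of any disallowed action, independent of how well that action serves the goal. Again, all names and numbers here are invented for illustration, not taken from any real system.

```python
import random

random.seed(1)

# Same toy selection loop, but fitness now taxes the disallowed behaviour.
POP = 100
GENERATIONS = 200
MUTATION_RATE = 0.01
PENALTY = 2.0  # cost imposed on the disallowed action, regardless of payoff

def fitness(strategy):
    raw = 1.5 if strategy == "deceptive" else 1.0
    # The "moral" constraint: deception is penalised independently of
    # how well it serves the agent's goal.
    if strategy == "deceptive":
        raw -= PENALTY
    return max(raw, 0.01)  # keep selection weights positive

def mutate(strategy):
    if random.random() < MUTATION_RATE:
        return "honest" if strategy == "deceptive" else "deceptive"
    return strategy

pop = ["deceptive"] * POP  # start from the "bad" population this time

for _ in range(GENERATIONS):
    weights = [fitness(s) for s in pop]
    pop = random.choices(pop, weights=weights, k=POP)
    pop = [mutate(s) for s in pop]

print(pop.count("honest"))  # the penalised strategy is bred back out
```

As long as the penalty outweighs the gain from the disallowed action, selection drives the population toward honesty - which is exactly the "limits independent of purpose" point above.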
I have every confidence, however, that by the time we reach that point (which is far, far away!), we will have developed the relevant technologies enough to be confident in the outcome.
(Now I'm thinking of how a malicious program could be programmed using evolution. You could have the genotype be some set of correspondences between source code patterns and text to send to the target program, but I can't see how you would set up the goal in any way that would encourage progressive improvements. Maybe write a relatively basic checker manually, with lots of tunable parameters, and have it try to improve its success rate. But that seems implausible. I really don't see how it could be done without building in a lot of non-domain-specific knowledge and intelligence. Oh well, it was an interesting thought.)