Originally Posted by swabian
What is it supposed to be, why should it be realized, and how are the ethical questions surrounding it to be parameterized?
The main concern about it seems to be that an AI might "learn to feel" and "outgrow" us as a human species. Why is that?
AI basically just means "learning, adaptive algorithms", which already exist, by the way. There is absolutely no need for it to have consciousness and self-awareness as humans experience them. It is, and will remain, a product that serves a purpose, just like a toaster does. Such systems are potentially nothing more than another array of levers and tools available for us to pull, push and use however we see fit.
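To make "learning, adaptive algorithm" concrete, here is a minimal sketch in Python (the function name and toy data are made up for this post, not taken from any library): a perceptron that adapts its weights from labelled examples. It is nothing but arithmetic in a loop; no consciousness is involved anywhere.

# Minimal sketch of a "learning, adaptive algorithm": a perceptron.
# Illustrative only; names and data are invented for this example.

def train_perceptron(samples, labels, lr=0.1, epochs=20):
    """Adjust weights from labelled examples: pure arithmetic, no awareness."""
    w = [0.0] * len(samples[0])  # weights, the "levers" being nudged
    b = 0.0                      # bias term
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            # Predict, compare to the label, nudge the levers toward the answer.
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn the logical AND function from four examples.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
print(w, b)  # weights that separate AND, as mechanical as a toaster's thermostat

That is the whole trick: predict, measure the error, adjust. "Learning" here is a description of the update loop, not of any inner experience.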
What kind of future software requires real self-awareness and consciousness? None! The only potential I see for those two dimensions is research, research and more research. Outside of an isolated experimental frame, there is virtually no industrialisable application for feeling, self-conscious artificial creatures.
I'd furthermore argue that employing sentient artificial beings as slaves would be exactly the same as abusing sentient biological creatures: simply cruel and inefficient. And that's assuming there were any reason to give them consciousness at all.
Consciousness isn't needed to process complex data; it might actually get in the way of that very purpose. Yes, maybe even human consciousness is overrated. Be that as it may, we have inherited this biological status quo (state of being) and we can't change it any time soon, not even after the "AI revolution". So, to hell with all those fears that AI may threaten the human being in and of itself. It could only do so if we explicitly and deliberately created it to do just that, which would be, outside of an isolated laboratory, all kinds of insane.
The actual problem with AI - as I see it - is that it's going to overthrow our current economic order. So let's discuss that instead, I suggest.