Timewave Zero said:
While I see your point, the human race is far too selfish to make itself fully obsolete.
There will always be a fail-safe or method of control. Not being able to control something really scares people, deep down.
There's always a certain amount of control in machines. Even if one had a super-advanced A.I. that could reason and strategize, it would only be able to reason and strategize to the extent we programmed it to.
Machines are already faster, stronger, more precise in movements and can calculate faster than the human brain. But we control all of this.
If we make a program to control all of this without human intervention, one that knows when and how to activate itself, how to control all of its functions, and how to repair itself...we will still have an element of control. Somehow we will retain a link, however small, of control over our creations, because in the mind of the human race we are superior to all things, and we unconsciously program that superiority into our mechanical creations so we can control them.
That's not *quite* true; AI is moving away from its roots, where computers showed intelligence by following a script that made them do things that looked intelligent, and towards algorithms where the system actually learns how to solve a problem starting from nothing (e.g., given only the rules of Starcraft II, learn how to develop the best Zerg build order: http://lbrandy.com/blog/2010/11/using-genetic-algorithms-to-find-starcraft-2-build-orders/; other examples exist).
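To make "learning from nothing" a bit more concrete, here's a minimal genetic-algorithm sketch in Python. It is not the code from the linked blog post; the genome, fitness function, and parameters are toy placeholders I've made up, but the loop (random population, selection, crossover, mutation) is the same basic idea used to evolve build orders from just the game rules and a scoring function.

```python
# Minimal genetic-algorithm sketch (illustrative only; not the StarCraft code).
# The "genome" is a bit string and the fitness function is a toy stand-in;
# a real build-order optimizer would score genomes by simulating the game.
import random

GENOME_LENGTH = 20
POPULATION_SIZE = 50
GENERATIONS = 100
MUTATION_RATE = 0.02

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Toy objective: maximize the number of 1s in the genome.
    return sum(genome)

def crossover(parent_a, parent_b):
    # Single-point crossover: splice the front of one parent onto the back of the other.
    point = random.randint(1, GENOME_LENGTH - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    # Flip each bit with a small probability.
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

def evolve():
    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for _ in range(GENERATIONS):
        # Rank by fitness and keep the better half as parents.
        population.sort(key=fitness, reverse=True)
        parents = population[: POPULATION_SIZE // 2]
        # Breed the next generation from random pairs of parents.
        population = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(POPULATION_SIZE)
        ]
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best genome:", best, "fitness:", fitness(best))
```

Nothing in that loop is scripted toward a particular answer; the program only ever sees the scoring function, which is what makes the approach interesting once the problem gets harder than a toy.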
Once one expands these algorithms to tackle problems more technically challenging than Starcraft (which researchers already are/have done), then in theory a program's reasoning ability wouldn't have a hard ceiling; its real limit would be its level of self-awareness. A self-aware program would be able to use the base algorithm to learn whatever it needed (from scratch) to do whatever it wants (including world domination); a program that isn't self-aware wouldn't know to try, and would just spend all those resources on the task handed to it. And one can't yet predict at what point an algorithm allows for self-awareness of that nature, nor is it something you just turn on and off with a boolean flag.
Having said that, I'm not really concerned about a near-future robot uprising; self-aware programs are probably a long way off, even at the speed AI is developing. But I wouldn't underestimate how far AI has advanced in the past few decades, or how far it may go in the next 20, 50, or 100 years.