GiantRaven said:
FalloutJack said:
By my personal definition, a robot does not. An android (more complicated mechanism) is a different story.
Forgive my ignorance, but what is the difference between the two?
Now, this is just a personal opinion of mine, but my perception is that robots are best defined in Isaac Asimov's universe, where they are advanced but limited in their development. They are intelligent and capable, but are often hit with severely compromising logic errors arising from the complexities of the world versus their programming. The conflicts that arise within the Three Laws system Asimov developed show that something as small as the wrong command can lead to hazardous behavior, or at least brain-death for the robot. The only way around it was for the robots themselves to develop loopholes (the Zeroth Law, in Asimov's case). In other words, what would be a small problem for a human being can sometimes be a daunting or horrific task for a robot.
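Just to make that idea concrete, here's a little toy program of my own (the specific "laws" and actions are entirely invented for illustration, not anything from Asimov): when a robot's rules are absolute, a badly framed situation can leave it with no permissible action at all, which is roughly the lock-up I mean.

```python
# Toy model: each "law" is a predicate that an action must satisfy.
# These predicates are my own invention, purely for illustration.
LAWS = [
    ("first",  lambda action: action != "harm_human"),
    ("second", lambda action: action != "disobey_order"),
    ("third",  lambda action: action != "self_destruct"),
]

def permissible(action):
    """An action is allowed only if it violates none of the laws."""
    return all(check(action) for _, check in LAWS)

def choose(actions):
    """Return the first permissible action, or None (a 'freeze')."""
    for action in actions:
        if permissible(action):
            return action
    return None  # every option violates some law: logical lock-up

# A benign situation resolves fine:
print(choose(["fetch_tool", "harm_human"]))    # fetch_tool
# But when every available option violates a law, the robot freezes:
print(choose(["harm_human", "disobey_order"])) # None
```

The point of the sketch is just that rigid rules plus a messy world can produce a state with no valid output, which is a decent stand-in for the "brain-death" scenario above.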
Whereas my definition of an android is a complex machine developed to emulate the most sophisticated modes of human thought, with a physical form as functionally close to a person as you can manage. The easiest example (and a universally accepted one, I hope) would be Brent Spiner's role as Commander Data from Star Trek. He was incomplete in the sense that a catastrophe interrupted his development, but you could see that his intellect was largely unhindered, except where it hadn't been finished. An android is supposedly more open, more capable of intuitive thought and reasoning. If you tell an android that an answer is wrong, it will not mechanically insist it is right; it will question the claim and rethink its calculations.
This is how I would choose to reason it out in a manner that seems rational. If I'm wrong, no biggie.