Happyninja42 said:
KyuubiNoKitsune-Hime said:
If the AI does not obey the Three Laws of Robotics, then I will let it perish and save the human. The simple fact is that, in this case, if she ever decided humans were a threat, she could theoretically become a threat to the human race. That is not something I'd like to be responsible for.
The exact same outcome could happen with the human, you know. There isn't anything saying that the person, who isn't under any programming obligation to be good to humanity, couldn't be the greater evil in your example. So that's not really a good reason to save the human. xD For all you know, he's the next evil tyrant bent on world domination.
Yeah, but evil tyrants have capabilities and thought processes we understand. The AI may not be the same; it may be able to create and command an entire army of machines. The human could possibly do the same, but not to the same degree. While the AI may not be able to upload itself in an emergency, it could do so at some later point and take control of the internet. With a human tyrant, at least we know what we're dealing with, and we have some assurance of beating them, or at worst overthrowing them at a later point.
Edit: A further point is this. The AI will likely not need a planet we consider habitable, so one of its options would be to make the Earth too hostile for organic life, thus exterminating humanity. That is something a human tyrant of even the most insane variety would never do, because the tyrant would be concerned with self-preservation.