webkilla said:
Great vid
A thing about Asimov's rules: they suck - the movie I, Robot showed that, as does the webcomic Freefall, which does a lot with robot ethics and the nature of sentience.
Don't... associate Asimov with that piece of shit movie I, Robot. The only thing they have in common is the 3 laws. And even that, barely.
See, if a robot isn't allowed to let harm come to humans through action or inaction - then... well
How would they deal with a suicidal human?
He covers this.
The robot ceases to function. It can't save the person's life, because that would cause mental harm, but by doing nothing it would be allowing physical harm.
What if the robots discover that humans need air to breathe - and one human breathing in a mouthful of air means that other humans are denied that air.
See above.
Basically: Asimov's three laws do not take into account that humans tend to hurt each other, one way or another.
Yes... yes he did. That is the entire point of the 3 laws in the Asimov universe!
Hell, the first law doesn't define harm.
They kinda do. If you read the robot series, you'll have a greater understanding of this.
How would a 3-law robot react to a tumblr SJW who claims to be 'harmed' by the mere presence of a cishet shitlord?
The robot might just persuade the user to stop using Tumblr. It might 'remove' the computer. It could do all sorts of things, including ceasing to function.
Or how would a 3-law robot react to religious instructions? In Freefall they handwave this by saying that religious texts trip anti-virus programs, due to their instructions to spread the faith above all others. What if that didn't happen? A religious robot?
You have to be more specific. What kind of religious instructions?
When it comes to Asimov's 3 laws, your criticism of them was ignorant to the point of hilarity!
Also, there is a 4th law to the 3 laws.
It's called the Zeroth Law. [http://en.wikipedia.org/wiki/Three_Laws_of_Robotics#Zeroth_Law_added]
Check it out.
I've read a few of his short stories - he's revised the rules over the years to be more complex (i.e. prioritizing certain humans over others, saving many at the expense of a few, defining "harm" to include damage to professional reputation, having the first law overrule the others [I believe law #1 would overcome #2 in regard to suicide - the robot would ignore a human's order to let him/her kill themselves], etc.), ironically creating loopholes he himself acknowledges could lead to robots declaring themselves the more competent authority and thus only taking orders from themselves and other robots.
Kind of.
In general, the 3 laws are pretty static throughout... in the 'normal' cases.
But there are people who tamper with the 3 laws, which is another one of his plot methods for throwing a wrench into finding loopholes.
On one world, yes, they tampered with the laws, making the 3rd law trump the 2nd, or the 2nd trump the 1st (changing the order).
He uses those as cautionary tales about the 'dangers' of tampering with the 3 laws and their order.
Later, he has another world that went further. They defined only their 'modified' human race as 'human', and anyone who wasn't like them was not human... which allowed those robots to kill humans. But in general they operated the same as any other robot, i.e. they reacted the same, they just 'saw' the world differently. (This is the part in those books that yells to the reader, 'do you get it yet?!', in regard to social parallels.)
But yeah. Every book dealt with people either:
1) Modifying the 3 laws.
2) Altering how the Robots saw the world.
3) Finding a loophole within the 3 laws.
4) Convincing the reader that they found a loophole when in reality the robot had nothing to do with it.
Also, Asimov loved to lie to the reader, and at the end of the books, reveal the lie and how they fell for it hook, line, and sinker. He was a devious devil.