Have we Broken the 3 Laws of Robotics?


Paksenarrion

New member
Mar 13, 2009
2,911
0
0
You know what's more horrifying than a breakdown of the Three Laws of Robotics?

Those damned Second Foundationers! Who are they to control our destiny?!
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,538
4,128
118
AccursedTheory said:
EDIT: To answer your question, you COULD have a limit on this type of AI. You could have limits and permanent instructions hard-wired into the hardware of the computer. If, say, 'Do not destroy mailboxes' was defined in an actual piece of hardware, rather than in the software, it would theoretically prohibit the AI from doing so, while still allowing it to expand in every other aspect. As long as the computer never became 'self-aware' (capable of analyzing its own internals, much like how the human brain cannot look upon itself), it would be incapable of creating a workaround to the hard code.
There's an easier way to limit it, simply to not install it into anything particularly dangerous. Robot rebellion stories only work if the AI is put in charge of important things, with no failsafes, preferably in charge of designing and building the next lot of robots.
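A rough software sketch of the idea in the quote (purely illustrative Python; every name here is made up, not from any real system): the adaptive part proposes actions, and a fixed gate it cannot rewrite decides what actually runs.

FORBIDDEN_ACTIONS = frozenset({"destroy_mailbox"})  # stand-in for a rule fixed in hardware

def execute(action, actuator):
    """Gate every proposed action through an immutable rule set before acting."""
    if action in FORBIDDEN_ACTIONS:
        raise PermissionError(f"'{action}' is blocked by a fixed constraint")
    actuator(action)

def planner(goal):
    """Placeholder for the adaptive part: free to propose anything, including forbidden acts."""
    return ["drive_forward", "destroy_mailbox", "deliver_parcel"]

if __name__ == "__main__":
    for proposed in planner("deliver mail"):
        try:
            execute(proposed, actuator=print)
        except PermissionError as err:
            print(err)

The catch is the same one the quote raises: this only holds as long as the system can never inspect or rewrite the gate itself.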
 

rutger5000

New member
Oct 19, 2010
1,052
0
0
As long as a human isn't the one pushing the button for the kill command, we're breaking the first law of robotics and have taken a very wrong turn.
The second and third only apply to self-conscious robots, and those don't exist. Personally I don't think they'll ever exist as long as their behavior is purely dictated by human programming.
As for those robots being used: I don't know whether the U.S. Military is using them or not, but I've also heard of gun-equipped spy drones.
Apart from the laws of robotics, I think it's among the highest sins to kill humans with a machine without actually being present. It isn't wrong for soldiers to kill each other, because they are soldiers and they understand that he who lives by the sword shall die by the sword (or at least they should). A soldier isn't a murderer as long as this principle is followed; however, when it's abandoned, the guy pushing the button in a safe bunker has become a murderer. Therefore even
 

WanderingFool

New member
Apr 9, 2009
3,991
0
0
WrongSprite said:
WanderingFool said:
WrongSprite said:
You know the 3 laws are fictional, right? From I-Robot?

Robots are gonna do whatever the hell we want them to, and seeing as we're human, killing is pretty high on the list.
Way before I-Robot, but since I-Robot was supposed to be an adaptation of one of Asimov's books...

Anyways, I seriously doubt it.
Uh... Asimov's book was called I, Robot; it's what I was referring to. Check your facts.
Damn, I stand corrected. My mistake.
 

CrystalShadow

don't upset the insane catgirl
Apr 11, 2009
3,829
0
0
Seaf The Troll said:
I have been wondering about this for a while.

We have made missiles that are computer-operated to hit a target,

built guns that work with cameras.

So have we already overstepped the rule? We have programmers working all the time making AIs to kill the players of games and adapting tactics. (Imagine if you piss off the AI you are fighting against and it fights back by hitting the real you with a missile.)


These are the 3 Laws.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


I would like to see your reasoning on this.
Ok, let's look at this for a moment:

1. None of the things you are talking about qualify as 'AI' in the sense that you're talking about. They don't have any capacity for independent thought.
Ironically, the AI in computer games might appear to come closest, but it's generally also a lot more stupid and limited than most people give it credit for.
Rip it out of the environment it was designed to work in, and it'll fail almost instantly, because it's not usually very flexible.

2. The laws of robotics are fictional, from a story written by Isaac Asimov.

3. The whole point of the story they come from is the writer demonstrating that regardless of how clever these rules sound, they'd be completely useless in practice.
(The AI in the story obeys all 3 laws, but reinterprets them in such a way as thinking it has to protect humans from themselves.)


Don't take the laws of robotics so seriously. Asimov didn't.
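Purely as an illustration of point 3 (a made-up sketch, not code from any real robotics stack), the laws are trivial to write down as an ordered veto; the hard part is every predicate inside it, and that wiggle room is exactly what the story's AI exploits.

def permitted(action, world):
    """Apply the three laws as vetoes, in priority order."""
    # First Law: no injuring a human, and no standing by while one comes to harm.
    if world["harms_human"].get(action, False):
        return False
    if action == "wait" and world.get("human_in_danger", False):
        return False
    # Second Law: obey orders, unless obeying would break the First Law.
    order = world.get("order")
    if order and action != order and not world["harms_human"].get(order, False):
        return False
    # Third Law: self-preservation, unless a direct order requires the risk.
    if world["harms_self"].get(action, False) and action != order:
        return False
    return True

if __name__ == "__main__":
    world = {
        "harms_human": {"fire_laser": True},
        "harms_self": {"walk_into_press": True},
        "order": "fetch_coffee",
        "human_in_danger": False,
    }
    for act in ("fire_laser", "fetch_coffee", "wait", "walk_into_press"):
        print(act, permitted(act, world))

Everything interesting here ("harms", "order", even "wait") is defined by whoever, or whatever, fills in the world model, which is the loophole the story turns on.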
 

Mikkaddo

Black Rose Knight
Jan 19, 2008
558
0
0
Seaf The Troll said:
I have been wondering about this for a while.

We have made missiles that are computer-operated to hit a target,

built guns that work with cameras.

So have we already overstepped the rule? We have programmers working all the time making AIs to kill the players of games and adapting tactics. (Imagine if you piss off the AI you are fighting against and it fights back by hitting the real you with a missile.)


These are the 3 Laws.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


I would like to see your reasoning on this.
To my knowledge we have not made any game where, if you anger an AI, it fires a missile at you; we do, however, have adaptive AI which changes its tactics in the game to beat you. However, you'll notice that Asimov's 3 laws pertain to PHYSICAL harm, or technically mental harm as well. An AI inside a game killing a digital representation of a human inside that game is not causing physical or mental harm of the sort that crosses the laws. The field of AI programming is making leaps and bounds, but robotics is still progressing very slowly, so we're not at Asimov's ideal apocalypse yet.

The Matrix is still a bit off from us yet, my friends, no worries. And honestly I don't see it happening in the next century. At least, not getting as far as the 3 laws being broken by a rogue AI. Then again... GLaDOS is a distinct possibility.
 

similar.squirrel

New member
Mar 28, 2009
6,021
0
0
I don't think these laws apply in the absence of artificial intelligence. And I don't think they should be enforced/programmed when we finally manage to create AI, either.
 

Dorian6

New member
Apr 3, 2009
711
0
0
The Laws of Robotics apply to a sentient AI, not to human-operated machines.

A missile needs an engineer to tell it where to go. A gun needs an operator to point and pull the trigger.
 

Senaro

New member
Jan 5, 2008
554
0
0
There's a difference between an AI making these decisions on its own and having a machine programmed to do a specific action as decided by its maker.
 

Flac00

New member
May 19, 2010
782
0
0
Seaf The Troll said:
snippity snip snip
No. The AIs don't actually react to anything but an avatar, which isn't real. If given the option to somehow kill a person, an AI would be too confused, or would lack the actual ability to do anything. So no, they don't break the first law; they don't break the second law, since people can edit AIs and give them commands via the command bar; and they don't break the final one either, apart from a few "suicidal" AIs, since they try to just stay alive.
 

Lonan

New member
Dec 27, 2008
1,243
0
0
They are completely ridiculous. If artificial intelligence were created (the ability for a computer to learn), it would not adhere to any laws at all. The Geth logically determined that they must defend themselves from destruction, so they fought back. And as is obvious from my mentioning of the Geth, this is clearly all fictional.

You related fiction to reality. The reality is computer programming made to fire missiles at targets, and other military applications of computers. There is no artificial intelligence in this, and even if there was, see above. By its definition, AI is not bound by laws. Also, you are talking about robots, which have specific programming and not artificial intelligence.
 

ninjajoeman

New member
Mar 13, 2009
934
0
0
Why exactly would robots want to kill us? I never really got that. Killing an "inferior race" would use resources that the AI needs.
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,538
4,128
118
rutger5000 said:
Apart from the laws of robotics, I think it's among the highest sins to kill humans with a machine without actually being present. It isn't wrong for soldiers to kill each other, because they are soldiers and they understand that he who lives by the sword shall die by the sword (or at least they should). A soldier isn't a murderer as long as this principle is followed; however, when it's abandoned, the guy pushing the button in a safe bunker has become a murderer. Therefore even
So, attacking a distant target from safety is inherently wrong? Artillery and bomber crews are murderers unless the enemy can fire back?

Surely wars are won by minimising the risk to your own forces, and maximising the damage done to the enemy?
 

rutger5000

New member
Oct 19, 2010
1,052
0
0
thaluikhain said:
rutger5000 said:
Apart from the laws of robotics, I think it's among the highest sins to kill humans with a machine without actually being present. It isn't wrong for soldiers to kill each other, because they are soldiers and they understand that he who lives by the sword shall die by the sword (or at least they should). A soldier isn't a murderer as long as this principle is followed; however, when it's abandoned, the guy pushing the button in a safe bunker has become a murderer. Therefore even
So, attacking a distant target from safety is inherently wrong? Artillery and bomber crews are murderers unless the enemy can fire back?

Surely wars are won by minimising the risk to your own forces, and maximising the damage done to the enemy?
Yeah, for me that is very, very wrong. Yeah, for me that's murder. Yes, it's also how most wars are won nowadays, but that doesn't make it right. And people shouldn't make it sound like it's anything other than bloody murder.
It's easy for me to say, but if I ever became a soldier I would much rather be killed by my enemy than use an unfair advantage to ensure my safety.
Who's right or wrong in wars will most often be forgotten, but who fought right or wrong will be remembered. Or at least that's how it works for me.
 

Hertzila

New member
Apr 5, 2010
18
0
0
We don't have anything that qualifies for those laws. No robot is actually capable of thinking for itself; there's always a human operator at some point giving the orders.
Also, I don't think we will ever adhere to those laws. The military will most likely be the first to have AIs that qualify, and they will not listen to those laws. Neither do I think that AIs will ever be present in everyday life as helpers or such (true strong AIs, that is; weak AIs are a possibility). Safety programming for preventing unintended shootings and friendly fire might count for something like these laws, but I doubt it.

Why all this talk about good enough AIs being outside any limiting factors, as if humans aren't? Morality isn't exactly as firm and solid as bedrock, and laws made by nations don't inherently limit us. The only thing we are really limited by is resources, and that affects AIs too.

rutger5000 said:
thaluikhain said:
rutger5000 said:
Apart from the laws of robotics, I think it's among the highest sins to kill humans with a machine without actually being present. It isn't wrong for soldiers to kill each other, because they are soldiers and they understand that he who lives by the sword shall die by the sword (or at least they should). A soldier isn't a murderer as long as this principle is followed; however, when it's abandoned, the guy pushing the button in a safe bunker has become a murderer. Therefore even
So, attacking a distant target from safety is inherently wrong? Artillery and bomber crews are murderers unless the enemy can fire back?

Surely wars are won by minimising the risk to your own forces, and maximising the damage done to the enemy?
Yeah, for me that is very, very wrong. Yeah, for me that's murder. Yes, it's also how most wars are won nowadays, but that doesn't make it right. And people shouldn't make it sound like it's anything other than bloody murder.
It's easy for me to say, but if I ever became a soldier I would much rather be killed by my enemy than use an unfair advantage to ensure my safety.
Who's right or wrong in wars will most often be forgotten, but who fought right or wrong will be remembered. Or at least that's how it works for me.
It's murder any way you look at it; people disagreeing with each other on grand scales and announcing it doesn't make it any more right or wrong. Whether it was done by another soldier using an assault rifle, sniper rifle, tank, knife, air strike, or a missile fired through a robot plays no factor in it.
 

rutger5000

New member
Oct 19, 2010
1,052
0
0
Hertzila said:
We don't have anything that qualifies for those laws. No robot is actually capable of thinking for itself; there's always a human operator at some point giving the orders.
Also, I don't think we will ever adhere to those laws. The military will most likely be the first to have AIs that qualify, and they will not listen to those laws. Neither do I think that AIs will ever be present in everyday life as helpers or such (true strong AIs, that is; weak AIs are a possibility). Safety programming for preventing unintended shootings and friendly fire might count for something like these laws, but I doubt it.

Why all this talk about good enough AIs being outside any limiting factors, as if humans aren't? Morality isn't exactly as firm and solid as bedrock, and laws made by nations don't inherently limit us. The only thing we are really limited by is resources, and that affects AIs too.

rutger5000 said:
thaluikhain said:
rutger5000 said:
Apart from the laws of robotics, I think it's among the highest sins to kill humans with a machine without actually being present. It isn't wrong for soldiers to kill each other, because they are soldiers and they understand that he who lives by the sword shall die by the sword (or at least they should). A soldier isn't a murderer as long as this principle is followed; however, when it's abandoned, the guy pushing the button in a safe bunker has become a murderer. Therefore even
So, attacking a distant target from safety is inherently wrong? Artillery and bomber crews are murderers unless the enemy can fire back?

Surely wars are won by minimising the risk to your own forces, and maximising the damage done to the enemy?
Yeah, for me that is very, very wrong. Yeah, for me that's murder. Yes, it's also how most wars are won nowadays, but that doesn't make it right. And people shouldn't make it sound like it's anything other than bloody murder.
It's easy for me to say, but if I ever became a soldier I would much rather be killed by my enemy than use an unfair advantage to ensure my safety.
Who's right or wrong in wars will most often be forgotten, but who fought right or wrong will be remembered. Or at least that's how it works for me.
It's murder any way you look at it; people disagreeing with each other on grand scales and announcing it doesn't make it any more right or wrong. Whether it was done by another soldier using an assault rifle, sniper rifle, tank, knife, air strike, or a missile fired through a robot plays no factor in it.
I don't think humans killing each other is a pretty thing. But it's human nature; humans will always wage war against each other. If you accept war as part of human nature, then all that can be discussed is how a war should be fought.