Have we broken the 3 Laws of Robotics?

Aug 25, 2009
4,611
0
0
That's assuming Isaac Asimov's three laws of robotics, which were intended for AI, are the best way to go when designing robots in the first place.

The three laws of robotics aren't some kind of universal truth; they're rules a science fiction author made up. Should we also complain that we haven't managed to break the speed of light barrier yet? There's a whole host of SF that tells us we should be able to do that by now.

Besides, the three laws of robotics are a bit odd, I think. A robot must not harm a human by action or inaction? Well, what about when harming one human would save ten? In that instance I'd really hope the robot would harm the one human.

Also, we don't even have artificial intelligence yet, so let's not get ahead of ourselves.
 

WrongSprite

Resident Morrowind Fanboy
Aug 10, 2008
4,503
0
0
WanderingFool said:
WrongSprite said:
You know the 3 laws are fictional, right? From I-Robot?

Robots are gonna do whatever the hell we want them to, and seeing as we're human, killing is pretty high on the list.
Way before I-Robot, but since I-Robot was supposed to be an adaptation of one of Asimov's books...

Anyways, I seriously doubt it.
Uh... Asimov's book was called I, Robot; that's what I was referring to. Check your facts.
 

oreopizza47

New member
May 2, 2010
578
0
0
Personally, I believe that no matter how many steps we take toward a true AI, we are stepping in the wrong direction. I myself require a healthy dose of paranoia to make it through the day, and with all the possible outcomes of an AI going rogue, there's plenty of reason for that paranoia. It's alright to make robots smarter than us, just as long as we stop trying to make them think. When they think, they'll realize we aren't worth keeping around. And as long as we have total control, they can't break the laws unless it's by our command.

Food for thought.
 

Steve Fidler

New member
Feb 20, 2010
109
0
0
Seaf The Troll said:
I have been wondering about this for a while.

We have made missiles that are computer-operated to hit a target.

Built guns that work with cameras.

So have we already overstepped the rules? We have programmers working all the time making AIs to kill the players of games and adapting tactics. (What if you piss off the AI you are fighting against and it fights back by hitting the real you with a missile?)


These are the 3 Laws.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


I would like to see your reasoning on this.
Those laws are fictional, and they don't apply to robotics anyway; they apply to AI. We do not have anything that can be classified as true AI yet.
 

WrongSprite

Resident Morrowind Fanboy
Aug 10, 2008
4,503
0
0
Serris said:
WrongSprite said:
You know the 3 laws are fictional, right? From I-Robot?

Robots are gonna do whatever the hell we want them to, and seeing as we're human, killing is pretty high on the list.
...

No, really?

They're not fictional from I-Robot; they were made by Isaac Asimov. If you don't know who that is, you should go out to the library more.
I'm sick of people telling me this in this thread. ISAAC ASIMOV WROTE I, ROBOT.

How about you go to a fucking library and read it.
 

Subbies

New member
Dec 11, 2010
296
0
0
WrongSprite said:
You know the 3 laws are fictional, right? From I-Robot?
Seaf The Troll said:
WrongSprite said:
You know the 3 laws are fictional, right? From I-Robot?

Robots are gonna do whatever the hell we want them to, and seeing as we're human, killing is pretty high on the list.
It's from Isaac Asimov :p
I, Robot being one of his books.
 

DEAD34345

New member
Aug 18, 2010
1,929
0
0
The person who created those 3 laws, Isaac Asimov, introduced them in a series of books that were all about how flawed those laws are and why they wouldn't work. The laws are broken to begin with; Asimov wrote them that way to set up a bunch of interesting plots in which they go horribly wrong, pretty much every time.
 

Wereduck

New member
Jun 17, 2010
383
0
0
The "3 Laws of Robotics" aren't laws in the scientific sense at all. It's such a misnomer I have trouble believing Asimov wrote it in English. Can anybody imagine coding such abstract instructions as the primary, secondary and tertiary imperatives of a real computer's operating system?
My point is this: for the 3 laws to work at all, you would have to be installing them in an AI. They simply wouldn't make sense to the kind of computers we actually have today, so no, we haven't violated the 3 laws.
To say nothing of the moral repugnance of actually inflicting them on a sentient machine if we could. That's a rant for another day.
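Just to make the point concrete, here is a purely hypothetical Python sketch of the laws as prioritized imperatives. The priority ordering is the trivial part; every predicate the laws actually depend on, like deciding whether an action harms a human, is exactly the part nobody knows how to write:

```python
# Purely hypothetical sketch: the Three Laws as prioritized imperatives.
# The "through inaction" clause is omitted entirely; even this simplified
# version cannot be completed, because predicates like would_harm_human()
# are exactly what nobody knows how to implement.

def would_harm_human(action) -> bool:
    """Stand-in for the First Law's core predicate."""
    raise NotImplementedError("no one knows how to compute 'harm'")

def permitted(action, ordered_by_human: bool, self_destructive: bool) -> bool:
    # First Law: never take an action that harms a human being.
    if would_harm_human(action):
        return False
    # Second Law: obey orders from humans, unless the First Law forbids it.
    if ordered_by_human:
        return True
    # Third Law: protect your own existence, unless the laws above forbid it.
    return not self_destructive
```

The skeleton runs, but the substance it needs doesn't exist, which is the whole problem.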
 

DocBalance

New member
Nov 9, 2009
751
0
0
AccursedTheory said:
True AI, as I put it (Again, I am not referring to the science of AI, as their definitions are looser than a 5 dollar hooker), will be able to create new ways to interpret data.

Current 'AI' cannot do this.

And no, from my perspective (Again, this is my opinion), these devices are NOT a step towards AI, as AI should be able to redefine its own parameters, not just regurgitate numbers based on algorithms not of its own design (the human mind does both, by the way).
By that logic, "true" AI will never exist under any sort of laws or limits, correct? If it is capable of redefining its parameters, then there are no laws or restrictions that can be placed on it, including any basic concept of morality. That may not seem bad at first, but when you have a man of metal tearing down your street with no concept of the problems with death, theft, and property damage, the triumph of creating pure intelligence suddenly looks just a little less important.
 

DefunctTheory

Not So Defunct Now
Mar 30, 2010
6,438
0
0
TheMaddestHatter said:
AccursedTheory said:
True AI, as I put it (Again, I am not referring to the science of AI, as their definitions are looser than a 5 dollar hooker), will be able to create new ways to interpret data.

Current 'AI' cannot do this.

And no, from my perspective (Again, this is my opinion), these devices are NOT a step towards AI, as AI should be able to redefine its own parameters, not just regurgitate numbers based on algorithms not of its own design (the human mind does both, by the way).
By that logic, "true" AI will never exist under any sort of laws or limits, correct? If it is capable of redefining its parameters, then there are no laws or restrictions that can be placed on it, including any basic concept of morality. That may not seem bad at first, but when you have a man of metal tearing down your street with no concept of the problems with death, theft, and property damage, the triumph of creating pure intelligence suddenly looks just a little less important.
Welcome to the SkyNet scenario.

EDIT: To answer your question, you COULD have a limit on this type of AI. You could have limits and permanent instructions hard-wired into the hardware of the computer. If, say, "Do not destroy mailboxes" was defined in an actual piece of hardware, rather than the software, it would theoretically prohibit the AI from doing so, while still allowing it to expand in every other aspect. As long as the computer never became "self-aware" (capable of analyzing its own internals; the human brain, by comparison, cannot look upon itself), it would be incapable of creating a workaround to the hard code.
 

DefunctTheory

Not So Defunct Now
Mar 30, 2010
6,438
0
0
dathwampeer said:
AccursedTheory said:
dathwampeer said:
What you explained it doing is essentially what we do.

How do you think general knowledge works for humans?

We store information from various sources and access it when we need to.

Our method of recall may be more convoluted and our storage capacity is not as refined, but we essentially do the same thing.

The only difference between a human and Watson in terms of intellect would be the level of self-awareness.

If you were to combine something like that with complex image recognition, like Natal, and slightly more advanced learning software, like the Honda robot in the video that got posted to you earlier, then I'd argue you'd have AI in the truest sense of the word, or at least a budding form of it.

I don't think Watson is the pinnacle, or even, as you put it, true AI. But it's certainly a milestone for it.

It's not just able to retain information. It can recall it for situational use when it's interacted with.

That's certainly a step in the right direction.
True AI, as I put it (Again, I am not referring to the science of AI, as their definitions are looser than a 5 dollar hooker), will be able to create new ways to interpret data.

Current 'AI' cannot do this.

And no, from my perspective (Again, this is my opinion), these devices are NOT a step towards AI, as AI should be able to redefine its own parameters, not just regurgitate numbers based on algorithms not of its own design (the human mind does both, by the way).
That's just an indicator of complexity.

Defining their own parameters will come in time.

Don't run before you can walk, etc.
It's two completely different processes that have nothing to do with each other in terms of development.
 

katsumoto03

New member
Feb 24, 2010
1,673
0
0
Catchy Slogan said:
AccursedTheory said:
You are under the assumption that AI exists.

You are wrong.
AI does exist; it's just in its extremely early stages atm.


OT: I think the 3 laws only really concern AI, and the weapons of today are simply machines and good computing.
Not really a true AI if it's not self-aware.

As for the OT: We don't have AIs these days. If we did, we'd be dead already. Fact.
 

Lawnmooer

New member
Apr 15, 2009
826
0
0
Seaf The Troll said:
We have made missiles that are computer-operated to hit a target.
They aren't thinking "I'm gonna asplode on that human." In fact, they don't think at all; they just regulate flight and make sure they fly to wherever people programmed them to fly.

Built guns that work with cameras.
Again, they don't think; they just allow humans to see and fire without putting themselves at harm (unless you mean a different kind of gun camera).

We have programmers working all the time making AIs to kill the players of games and adapting tactics.
They make the AI respond to what the player does in-game; it does not set out to harm the player of the game, just the player's avatar in-game (basically, whatever the computer plays as in the game, so it is really being programmed to kill other AIs as well, as long as we don't run around with a code on us marking us as the enemy).

Until we build a fully functioning, self-sufficient and self-aware robot with its own AI that does not need help to learn or do things, and that robot then harms a human or disobeys an order, we won't even be close to breaking any of the laws.
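As a purely hypothetical Python sketch of how shallow that "hostility" is: a game enemy's tactic selection only ever sees the avatar's in-game state, and the human behind the controller appears nowhere in the model:

```python
# Hypothetical sketch of a game enemy "adapting tactics". Note that only
# the avatar's in-game state appears anywhere; the human holding the
# controller does not exist in the model.

def choose_tactic(avatar_distance: float, avatar_in_cover: bool) -> str:
    if avatar_in_cover:
        return "throw_grenade"   # flush the avatar out of cover
    if avatar_distance < 5.0:
        return "melee_attack"    # punish the avatar up close
    return "shoot_from_cover"

print(choose_tactic(3.0, False))   # melee_attack
print(choose_tactic(20.0, True))   # throw_grenade
```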
 

inFAMOUSCowZ

New member
Jul 12, 2010
1,586
0
0
I do wonder if we could ever get AI. But reasonable AI, like, say, in Halo. Those AIs aren't batshit insane, don't want to kill anyone, and you can hold a nice conversation with them.
 

maninahat

New member
Nov 8, 2007
4,397
0
0
Seaf The Troll said:
I have been wondering about this for a while.

We have made missiles that are computer-operated to hit a target.

Built guns that work with cameras.

So have we already overstepped the rules? We have programmers working all the time making AIs to kill the players of games and adapting tactics. (What if you piss off the AI you are fighting against and it fights back by hitting the real you with a missile?)


These are the 3 Laws.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.


I would like to see your reasoning on this.
Because those laws are works of fiction, and not legally binding international policy?