Poll: Robots. Free Will, Compliance and the Three Laws


Adam Jenson

New member
Dec 23, 2008
879
0
0
Despite their futuristic trappings, robots and automatons have been around for centuries, albeit nowhere near as advanced as anything like the Honda Sapian or the QRIO, and essentially as clockwork novelties. However, in science fiction films and books the robot, in all its forms and guises, has either been gifted a personality and free will or been a servant to its human masters. What I wish to ask you, my fellow Escapists, is which kind of robot you think would be the safest or most logical.

I would also like to remind you all of the three laws of robotics, as devised and written by the acclaimed science fiction writer Isaac Asimov in his Robot stories.

i. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

ii. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

iii. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While I realise that the laws have been changed and modified since their conception, I have chosen to use the original laws for simplicity's sake. In essence, the laws are a perfect circle of protection.
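To make that priority ordering concrete, here is a minimal sketch in Python. It is purely illustrative, and every name in it is my own invention rather than anything from Asimov; an action is simply judged by the highest-ranking law that has an opinion on it.

def evaluate(action):
    """Return (allowed, reason) for a dict describing a candidate action."""
    # First Law outranks everything: harming a human is always refused,
    # even if a human ordered it. (The "through inaction" clause is not
    # modelled here; it causes trouble of its own, as posts below show.)
    if action.get("harms_human"):
        return False, "First Law: may not injure a human being"
    # Second Law is only reached once the First Law raises no objection.
    if action.get("ordered_by_human"):
        return True, "Second Law: must obey human orders"
    # Third Law: self-preservation, considered last of all.
    if action.get("endangers_self"):
        return False, "Third Law: must protect its own existence"
    return True, "no law applies"

# An order to harm a human is refused (Second yields to First), while an
# order that merely endangers the robot is obeyed (Third yields to Second).
print(evaluate({"harms_human": True, "ordered_by_human": True}))
print(evaluate({"endangers_self": True, "ordered_by_human": True}))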

The second question I ask you all: could a robot with a personality and free will follow the laws, and if so, is that really free will?
 

RetiarySword

New member
Apr 27, 2008
1,377
0
0

They should be able to have free will, to a certain point. Also, those laws ARE just fiction, so at the moment nothing is stopping them from killing us. But personality is a must, just to talk to robots!

To sum up this post: they should be able to think for themselves, and they must comply with human commands. Like Data in Star Trek!
 

Nivag the Owl

Owl of Hyper-Intelligence
Oct 29, 2008
2,615
0
41
Aww, come on, people: compliance only. They're robots. They will NEVER genuinely think for themselves, and whatever way you look at it, unless we get to the point where we install actual brains into robots, they don't have emotions or feelings, just the illusion that they do. They are just code. They are not living things.
 

o_0coxy0_o

New member
Mar 4, 2009
24
0
0
Personality and compliance: then I can talk to it, and still boss it around, like a little brother or sister. =D
 

Zersy

New member
Nov 11, 2008
3,021
0
0
Adam Jenson said:
Despite their futuristic trappings, robots and automatons have been around for centuries, albeit nowhere near as advanced as anything like the Honda Sapian or the QRIO, and essentially as clockwork novelties. However, in science fiction films and books the robot, in all its forms and guises, has either been gifted a personality and free will or been a servant to its human masters. What I wish to ask you, my fellow Escapists, is which kind of robot you think would be the safest or most logical.

I would also like to remind you all of the three laws of robotics, as devised and written by the acclaimed science fiction writer Isaac Asimov in his Robot stories.

i. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

ii. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

iii. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While I realise that the laws have been changed and modified since their conception, I have chosen to use the original laws for simplicity's sake. In essence, the laws are a perfect circle of protection.

The second question I ask you all: could a robot with a personality and free will follow the laws, and if so, is that really free will?
First of all, GO ISAAC ASIMOV!! (He wrote the most awesome books.)

Now to the main point: having a robot with free will and personality would be a problem. If you read I, Robot by Isaac Asimov you will know why, but for those of you who haven't read the book, I'll explain: the robot refused to believe what the humans believed. It believed it was superior to humans, causing A LOT of problems.

So I think that robots should be kept only as servants.
 

nekolux

New member
Apr 7, 2008
327
0
0
I say free will and personality. To quote Cameron, "We're not made to be cruel". Even without these laws, I predict that the most civil and law-abiding citizens would be the robots. (Look at the number of jerks around.) Perfectly logical cyborgs will not go around destroying people.

Also... I aspire to build a robot girlfriend one day. Fully sentient and everything... YOU CAN'T STOP OUR LOVE!!! =(

Edit: I also think that we should have a class system for robots. We have the citizens, who have free will and personality and add to society peacefully. We have the drones, who do the menial tasks: mining, manufacturing, etc. We'll also have the law-enforcement ones. Now, obviously we have to be very careful with the drones and the law-enforcement robots. They will be programmed without free will or personality. A certain ability to learn will be implemented; however, the point is that they cannot override their factory programming, whereas a fully free citizen robot will be able to.

Also, no one told you to give them hydraulic actuators that can apply five tons of force, you know.
 

Souplex

Souplex Killsplosion Awesomegasm
Jul 29, 2008
10,312
0
0
I would like a mix of the three. They have free will, but my orders take priority over that free will. They should all make sarcastic comments and have the voice of a depressed British child.
 

Shadow Law

New member
Feb 16, 2009
632
0
0
Personality and compliance, because it would be like having a buddy hanging around, and you don't have to worry about him stabbing you in the back with a switchblade.
 

Valiance

New member
Jan 14, 2009
3,823
0
0
I said Free Will and Personality.

Show me the robot that doesn't beat me at a game of chess, but instead creates its own table-top game.

I will admire how far we've come.
 

Inverse Skies

New member
Feb 3, 2009
3,630
0
0
The idea of AI becoming self-aware and able to learn from its mistakes has always been a popular theme in science fiction...

'When humanity developed a machine which could think and learn from its mistakes, they signed the death warrant for mankind' - Dune series.

Asimov's I, Robot, with its three (incredibly clever) laws, took a different approach, in the sense that the robots did not declare war on humanity per se, but took over silently, without humanity even realising. One interesting bit is when mining robots have the First Law modified to stop them running in to save humans in high-radiation zones: it is truncated to 'A robot may not harm a human being'. That means a robot could, say, drop a heavy load above a human and then, by declining to catch it, allow the human to come to harm, while arguing that the drop itself was never intended as harm.
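As a rough sketch of how small that edit is (my own toy illustration, with invented field names), the change amounts to dropping a single clause from the check:

def violates_first_law(act, truncated=False):
    # Actively harming a human is forbidden under either version of the law.
    if act.get("actively_harms_human"):
        return True
    # Only the full law also covers harm allowed through inaction.
    if act.get("allows_harm_by_inaction"):
        return not truncated
    return False

# Standing by while a heavy load falls on someone:
bystanding = {"allows_harm_by_inaction": True}
print(violates_first_law(bystanding))                  # True: the full law objects
print(violates_first_law(bystanding, truncated=True))  # False: the loophole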

Personally, whilst this makes for very good science fiction, I find it highly unlikely that anything like this will ever happen in real life or in the future. A self-aware, learning AI is akin to creating life itself, especially as any organism aware of its own existence would logically take steps to preserve that existence. I really only see robots as helpers in our everyday lives, limited to their original programming, because creating intelligence per se would be an achievement which I very much doubt even humanity can manage.
 

Lullabye

New member
Oct 23, 2008
4,425
0
0
I think humanity will eventually give way to AI. Why do I think this? Because we are human. A thousand years ago, meteorites and going to the moon were the last things on people's minds; in fact, they dismissed such things as gods or magic. I can't prove that we will, but the one thing I'm absolutely sure of is that humanity will bring about its own destruction.
 

Flour

New member
Mar 20, 2008
1,868
0
0
The safest would be what we have now: robots that do only one thing and nothing outside of their programming.

As for the second question: I don't believe in free will, only the illusion of it. If a robot learns that any outcome resulting in damage to something or someone is "undesired", then it will never do such a thing, and it would pass that lesson on to the next generation of AIs.
If you mean the laws to be like human laws, then following them wouldn't go against any programming (or free will, if you prefer that) for an intelligent being. You break the law, you get punished, and the next generation (if they create their own upgrades) will no longer break that law.
Basically, robots that can learn will be better humans than humans will ever be, and if I die while our robotic servants take over the world, I will die happy as long as "damage to nature" is an undesired outcome.
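As a toy sketch of that "undesired outcome" idea (entirely made up, not any real learning algorithm): flag damaging outcomes with a huge penalty so the agent never picks them again, and hand the learned table to the next generation.

UNDESIRED_PENALTY = -1000.0

class Agent:
    def __init__(self, values=None):
        self.values = dict(values or {})       # action -> learned value

    def observe(self, action, caused_damage):
        # Any action whose outcome damaged something or someone is
        # stamped as undesired; harmless actions get a mild reward.
        self.values[action] = UNDESIRED_PENALTY if caused_damage else 1.0

    def choose(self, actions):
        # Actions it has never tried default to a neutral 0.0.
        return max(actions, key=lambda a: self.values.get(a, 0.0))

    def spawn_successor(self):
        return Agent(self.values)              # the lesson is inherited

robot = Agent()
robot.observe("shove_human", caused_damage=True)
robot.observe("fetch_tea", caused_damage=False)
print(robot.choose(["shove_human", "fetch_tea"]))               # fetch_tea
print(robot.spawn_successor().choose(["shove_human", "idle"]))  # idle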

Also, those laws are flawed. If robots decided that the best way to prevent harm to humans would be to lock every human in a room, nothing could be done about it. (I'm quite sure there are movies made about this.)

Note: I probably forgot the original question somewhere in this post, but I don't care.

EDIT: spelling errors
 

Alex_P

All I really do is threadcrap
Mar 27, 2008
2,712
0
0
A parable about AI morality: Sorting Pebbles Into Correct Heaps [http://www.overcomingbias.com/2008/08/pebblesorting-p.html].

-- Alex
 

Silver

New member
Jun 17, 2008
1,142
0
0
Nivag said:
Aww, come on, people: compliance only. They're robots. They will NEVER genuinely think for themselves, and whatever way you look at it, unless we get to the point where we install actual brains into robots, they don't have emotions or feelings, just the illusion that they do. They are just code. They are not living things.
Today, yes. But just as we were once mere animals and became something more, robots could become more as well. And emotions are just chemical reactions and electrical impulses, reactions we can already reproduce with herbs, pills, whatever. It's quite possible we'd be able to produce such effects in a computer far sooner than we'd be able to write a functional AI.

However, that's so far into the future it's hardly worth considering. I don't see the need for robots to have free will. Not the robots we have today; they'd have no use for it. In a few thousand years, when we have "real AI"? Who knows? Who knows what shape that AI will take, what its uses will be, what its input and output will be? There are so many things to factor into whether laws like that should be made (and I don't think Asimov's laws would be the best, seeing as he himself had to correct them several times over) or whether complete free will would be better. You have to take so much into consideration: how humans will have evolved by then, how many robots there are, how they work, whether we even have robots at all (we could always keep robots and AI separate; nothing says the best thing to do would be to combine them, except the fantasies of people today).
 

Alex_P

All I really do is threadcrap
Mar 27, 2008
2,712
0
0
Nivag said:
Aww, come on, people: compliance only. They're robots. They will NEVER genuinely think for themselves, and whatever way you look at it, unless we get to the point where we install actual brains into robots, they don't have emotions or feelings, just the illusion that they do. They are just code. They are not living things.
The stuff the human brain is made out of has no essential intelligence-containing quality that can't be replicated. It's the interconnectedness and organization of the brain that allow us to think. With enough effort, that is something we can probably reproduce with hardware and software. The essentialist line that an embodied artificial mind can't "truly" think is damn weak.

Just because you have an intelligent thing in your pocket doesn't mean you have to treat it like a human being, though.

-- Alex
 

Aethonic

New member
Jan 10, 2008
21
0
0
If they have free will, then they have a choice not to follow the laws.

I don't really see much point in making robots with personalities or free will beyond novelty value. Or creepy fetish value. Should they ever be made, they would be just really advanced computers hooked up to mechanical limbs. If computers could actually think for themselves, well, they call it the singularity for a reason.
 

Nivag the Owl

Owl of Hyper-Intelligence
Oct 29, 2008
2,615
0
41
Silver said:
Nivag said:
Aww, come on, people: compliance only. They're robots. They will NEVER genuinely think for themselves, and whatever way you look at it, unless we get to the point where we install actual brains into robots, they don't have emotions or feelings, just the illusion that they do. They are just code. They are not living things.
Today, yes. But just as we were once mere animals and became something more, robots could become more as well. And emotions are just chemical reactions and electrical impulses, reactions we can already reproduce with herbs, pills, whatever. It's quite possible we'd be able to produce such effects in a computer far sooner than we'd be able to write a functional AI.

However, that's so far into the future it's hardly worth considering. I don't see the need for robots to have free will. Not the robots we have today; they'd have no use for it. In a few thousand years, when we have "real AI"? Who knows? Who knows what shape that AI will take, what its uses will be, what its input and output will be? There are so many things to factor into whether laws like that should be made (and I don't think Asimov's laws would be the best, seeing as he himself had to correct them several times over) or whether complete free will would be better. You have to take so much into consideration: how humans will have evolved by then, how many robots there are, how they work, whether we even have robots at all (we could always keep robots and AI separate; nothing says the best thing to do would be to combine them, except the fantasies of people today).
Alex_P said:
Nivag said:
Aww, come on, people: compliance only. They're robots. They will NEVER genuinely think for themselves, and whatever way you look at it, unless we get to the point where we install actual brains into robots, they don't have emotions or feelings, just the illusion that they do. They are just code. They are not living things.
The stuff the human brain is made out of has no essential intelligence-containing quality that can't be replicated. It's the interconnectedness and organization of the brain that allow us to think. With enough effort, that is something we can probably reproduce with hardware and software. The essentialist line that an embodied artificial mind can't "truly" think is damn weak.

Just because you have an intelligent thing in your pocket doesn't mean you have to treat it like a human being, though.

-- Alex
In response to you both: I completely disagree. There is no way that a robot's reaction or personality can be truly random. They can be random to the extent that one of many pre-programmed reactions can be initiated, but they will never be able to just generate new reactions, and this is one of the reasons I feel they lack the potential to be equals. The other thing is that they can't possibly have the ability to physically feel anything, be it pain or emotion. A robot will be able to act as if it feels happy or as if it's had its heart broken, but in reality it's fake.

They will never have the ability to have free will. Anything that a robot could ever logically and possibly do was programmed by a human. It can't do anything for itself.
 

Alex_P

All I really do is threadcrap
Mar 27, 2008
2,712
0
0
Nivag said:
They will never have the ability to have free will. Anything that a robot could ever logically and possibly do was programmed by a human. It can't do anything for itself.
I take it you've never heard of machine learning or artificial neural networks.
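For illustration, here's a toy perceptron (made up for this post, not any real system) that learns AND from examples. Nobody writes the rule in by hand; it falls out of the weight updates:

# Toy perceptron learning the AND function from labelled examples.
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, bias = 0.0, 0.0, 0.0

for _ in range(20):                            # a few passes over the data
    for (x1, x2), target in examples:
        out = 1 if w1 * x1 + w2 * x2 + bias > 0 else 0
        err = target - out                     # perceptron learning rule
        w1 += 0.1 * err * x1
        w2 += 0.1 * err * x2
        bias += 0.1 * err

for (x1, x2), _ in examples:
    print((x1, x2), "->", 1 if w1 * x1 + w2 * x2 + bias > 0 else 0)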

-- Alex
 

wordsmith

TF2 Group Admin
May 1, 2008
2,029
0
0
Adam Jenson said:
i. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

ii. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

iii. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I can immediately see one flaw in this: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." So you walk into a bar and order a beer and a packet of pork scratchings (fried pork fat, the perfect wingman for a good old British pint).

Mr Roboto comes up to you and says, "I'm sorry, I can't let you do that. That beer contains alcohol, and those pork scratchings contain fat and salt. For your own protection I must prevent you from consuming these."

Basically, what counts as "allowing a human to come to harm"? If they're about to be hit by a car or mugged, fair play. If they're "doing damage" to themselves just by going about everyday life, that's not so great.
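Put another way, here's a sketch of my own (the numbers are made up): the rule itself is trivial, and all the trouble hides in wherever you set the harm threshold.

HARM_THRESHOLD = 0.5   # tune this and the robot's whole personality changes

situations = {
    "about to be hit by a car": 0.95,
    "about to be mugged":       0.80,
    "drinking a pint":          0.02,
    "eating pork scratchings":  0.03,
}

for event, expected_harm in situations.items():
    verb = "intervenes" if expected_harm > HARM_THRESHOLD else "stays out of it"
    print(event + ": robot " + verb)

# Set HARM_THRESHOLD to 0.01 and Mr Roboto confiscates your beer.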

I wouldn't give robots freedom for the same reason I wouldn't give a security guard the keys and security code to my house/safe etc. Yes, it's great whilst he's on your side, but if you are doing something that he doesn't agree with, you've now got to argue with a guy who's taller, more muscular, and trained to incapacitate people.
 

Alex_P

All I really do is threadcrap
Mar 27, 2008
2,712
0
0
wordsmith said:
Mr Roboto comes up to you and says, "I'm sorry, I can't let you do that. That beer contains alcohol, and those pork scratchings contain fat and salt. For your own protection I must prevent you from consuming these."
That's kinda what Daneel, Asimov's #1 robot, actually ends up doing on a grand scale. With cultural manipulation and puppet governments and shit.

-- Alex