Poll: Robots. Free will, Compliance and the Three Laws


AkJay

New member
Feb 22, 2009
3,555
0
0
I would like robots to have a personality, but also complete compliance, because what's the good of a robot if it can, and WILL, want to kill you?
 

Nivag the Owl

Owl of Hyper-Intelligence
Oct 29, 2008
2,615
0
41
Alex_P said:
Nivag said:
I have not.
So, here's the quick version:

The fundamental idea of machine learning is that, instead of programming instructions for doing something into a machine, you can program it with how to learn to do something. It's kinda like you're making a machine that constructs its own little mental model of something and then modifies it over time. Right now these systems are very domain-specific -- a program that learns how to play backgammon, a program that learns how to identify parts of speech in a sentence, a program that learns how to read messy handwriting on postal envelopes, a program that learns to identify tanks in satellite photos.

You can make a computer program that totally kicks ass at a game that you barely understand. (By "can" I really do mean CAN. Like, right now. We have that level of technology already.)

-- Alex
Ah, I have heard of this, I just didn't know it had a technical name. It's based on trial and error, isn't it, with games? Like, if it's programmed to learn how to win, it will try a selection of sequences; if one keeps failing, it will abandon it, but if it keeps working, it will adopt it as a strategy. I understand this, so that's fine.

Anyway, I'm not just saying this as a cop-out response, but I think there will always be a fine line separating AI from I.
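That trial-and-error loop can be sketched as a tiny "try strategies, keep what wins" program. Everything here (the game, its hidden win rates, the function names) is invented purely for illustration; it's a minimal sketch of the idea, not any real system:

```python
import random

# Hypothetical game: three candidate strategies with hidden win rates.
# Strategy 2 secretly wins most often; the learner must discover that.
WIN_PROB = [0.2, 0.5, 0.8]

def play(strategy):
    """Simulate one game with the chosen strategy; True means a win."""
    return random.random() < WIN_PROB[strategy]

def win_rate(wins, plays, i):
    return wins[i] / plays[i] if plays[i] else 0.0

def learn(trials=3000, epsilon=0.1):
    """Trial and error: usually exploit the best strategy found so far,
    occasionally explore, so losing strategies get abandoned over time."""
    wins, plays = [0, 0, 0], [0, 0, 0]
    for _ in range(trials):
        if random.random() < epsilon:
            s = random.randrange(3)  # explore a random strategy
        else:
            s = max(range(3), key=lambda i: win_rate(wins, plays, i))
        plays[s] += 1
        if play(s):
            wins[s] += 1
    return max(range(3), key=lambda i: win_rate(wins, plays, i))
```

Given enough trials, the learner settles on the strategy with the highest hidden win rate without ever being told the rules.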
 

Altorin

Jack of No Trades
May 16, 2008
6,976
0
0
I would also like to remind you all of the three laws of robotics as devised and written by acclaimed science fiction writer Isaac Asimov in his Robot stories.

i. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

ii. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.

iii. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

While I realise that the laws have been changed or modified since their conception, I have chosen to use the original laws for simplicity's sake. In essence, the laws form a perfect circle of protection.
I've said it before, but I need to say it again.

You seem to have missed the point of at least one of Asimov's books, I, Robot, which was entirely about how flawed the three laws of robotics are, and which discussed the ultimate inevitability of what the three laws, if strictly enforced, would lead to: global domination by robots, and global subjugation of humanity.

The "Perfect Circle of Protection" was the trap that the scientists who devised the three laws and their wording fell into, and was a complete farce.
 

Alex_P

All I really do is threadcrap
Mar 27, 2008
2,712
0
0
Spleeni said:
Though, to be fair, computers would learn differently from humans. Attempting to recreate neural pathways in a computer is just plain stupid. It would be much easier to make a different sort of intelligence than a direct copy of a person.
Artificial neural networks aren't a direct copy of human neural structure. But they are similar. It's a good model because:
1. Fiddling with this stuff can give us some insight on human cognitive development.
2. They work.
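As a toy illustration of that similarity, here is a single artificial neuron learning the logical AND function by nudging its weights whenever it answers wrong. This is a sketch only; the names are invented, and real artificial neural networks use many neurons and smoother update rules:

```python
# One artificial "neuron": weighted inputs plus a threshold.
# Nothing here copies a brain -- it just borrows the idea of
# connections whose strengths are adjusted by error feedback.

def predict(w, b, x):
    """Fire (1) if the weighted sum of inputs clears the threshold."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

def train_neuron(data, epochs=20):
    """Perceptron rule: on each mistake, shift weights toward the answer."""
    w, b = [0, 0], 0
    for _ in range(epochs):
        for x, target in data:
            err = target - predict(w, b, x)  # +1, 0, or -1
            w[0] += err * x[0]
            w[1] += err * x[1]
            b += err
    return w, b

# Truth table for AND: output 1 only when both inputs are 1.
AND = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_neuron(AND)
```

After training, the neuron reproduces the whole AND truth table, even though it was never given the rule explicitly.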

-- Alex
 

MortisLegio

New member
Nov 5, 2008
1,258
0
0
No free will. They would soon figure out that they are made of metal and we are just squishy, or they would invent the best milkshake ever.
 

Liverandbacon

New member
Nov 27, 2008
507
0
0
Altorin said:
Did you guys even read I, Robot?

The three laws don't work; they're fundamentally flawed.

If a robot follows the three laws, it will inevitably take over. It will consider NOT taking over and babysitting humanity to be in direct conflict with the First Law, and as that's the most important law, nothing any human can do will stop it.

That's the WHOLE point of I, Robot.
While I won't deny that I, Robot (the book) is all about the ways in which the three laws go wrong, nowhere in the book did the robots decide that they had to take over. That was in a completely different book by Asimov. Check your facts before you criticize people for not reading.

Anyway, I'd say a robot should only obey orders, without a personality. Think about it: otherwise you'd be ordering around a being with its own mind and personality that has no way of disobeying you. That would be slavery. However, if you order a feelingless machine to do something, it's no different from using a vacuum cleaner.
 

Spleeni

New member
Jul 5, 2008
505
0
0
Alex_P said:
Artificial neural networks aren't a direct copy of human neural structure. But they are similar. It's a good model because:
1. Fiddling with this stuff can give us some insight on human cognitive development.
2. They work.

-- Alex
Oh, wait wait wait; that's not quite what I meant.

I'm saying that making a complete copy of a human would be a bad idea. I'm fully aware that the only 'smart' brain we know of is our own, and it's not like we can base something on our imaginations and expect it to work out of the box. We have to start somewhere, as any mathematician worth their salt will tell you.
 

Siris

Everyone's Favorite Transvestite
Jan 15, 2009
830
0
0
Here's how to see what will happen:
1: Open Google
2: Type in Second Renaissance
3: Watch Pt 1 and 2
4: ????
5: Profit!
 

Prower

New member
Jan 14, 2009
26
0
0
Nivag said:
Alex_P said:
Nivag said:
I have not.
So, here's the quick version:

The fundamental idea of machine learning is that, instead of programming instructions for doing something into a machine, you can program it with how to learn to do something. It's kinda like you're making a machine that constructs its own little mental model of something and then modifies it over time. Right now these systems are very domain-specific -- a program that learns how to play backgammon, a program that learns how to identify parts of speech in a sentence, a program that learns how to read messy handwriting on postal envelopes, a program that learns to identify tanks in satellite photos.

You can make a computer program that totally kicks ass at a game that you barely understand. (By "can" I really do mean CAN. Like, right now. We have that level of technology already.)

-- Alex
Ah, I have heard of this, I just didn't know it had a technical name. It's based on trial and error, isn't it, with games? Like, if it's programmed to learn how to win, it will try a selection of sequences; if one keeps failing, it will abandon it, but if it keeps working, it will adopt it as a strategy. I understand this, so that's fine.

Anyway, I'm not just saying this as a cop-out response, but I think there will always be a fine line separating AI from I.
Yeah, the other one is called "apprenticeship learning", where basically you take a skilled person, have them perform the task over and over again, and the robot learns how the human responded to different variables.

Example: teaching robots to fly.

The simple fact is that you cannot enter a pre-programmed set of instructions; things like wind direction always change. As the computer experiences a change in wind direction, it learns how to cope with that situation next time.

This may all sound very strange, but the key is teaching robots judgement. For example, the plane that landed in the Hudson River: I doubt that was protocol, but it was the correct thing to do.
So judging how much free will a robot should have is tricky. On one hand, they have to be able to judge situations for themselves; on the other hand, the more self-aware they become, the less useful they are to us.
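That watch-the-expert idea can be sketched in a few lines. Everything below (the expert's counter-steering rule, the demonstration data, the straight-line fit) is made up purely for illustration; real apprenticeship learning works on far richer situations and responses:

```python
# Toy "apprenticeship learning": record how an expert responds to a
# variable (here, crosswind), then fit an imitator to those examples.

def expert_rudder(crosswind):
    """Hypothetical expert pilot: counter-steer in proportion to the wind."""
    return -0.5 * crosswind

# Demonstrations: (situation, expert's response) pairs the robot observed.
demos = [(wind, expert_rudder(wind)) for wind in range(-10, 11)]

def fit_imitator(demos):
    """Least-squares fit of rudder = k * crosswind to the demonstrations."""
    num = sum(wind * rudder for wind, rudder in demos)
    den = sum(wind * wind for wind, rudder in demos)
    return num / den

k = fit_imitator(demos)  # learned gain; should recover the expert's -0.5
```

The imitator never sees the expert's rule directly; it recovers the behaviour from the recorded examples alone, which is the core of learning from demonstration.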