Teaching Morality to an AI


rossatdi

New member
Aug 27, 2008
2,542
0
0
I'm not going to get into this one. If people want background ammunition for the humanist side then you could do worse than reading [a href="http://en.wikipedia.org/wiki/A_Theory_of_Justice"]John Rawls's A Theory of Justice[/a].

Now I'm going to run and hide before another religion thread explodes. Not normally one to walk away from a fight but hangovers certainly increase one's desire for everyone to stop yelling.
 

Knight Templar

Moved on
Dec 29, 2007
3,848
0
0
dukethepcdr said:
It's impossible to teach morality without religion. If you leave God out of the picture, there is no morality. God and His Son Jesus taught humanity what morals are. Without God, man is a selfish, brutal, rude, cruel, greedy creature. If you don't believe me, just look at all the people running around who say they don't believe in God and who won't take what He teaches us in the Bible seriously. The only thing keeping most of them from committing terrible crimes against other people is fear of getting caught by the police and punished by the judicial system (which is founded on Biblical principles). Without government and without religion, man is little more than a brute.
Many people have used religion to destroy and kill.


People do good things for reasons other than fear. If you really believe what you just said, I pity you. I don't feel anger or dislike, only pity that you are so scared of everybody around you.

I never thought about this question; it always seemed so simple to me. Yes.
 

jasoncyrus

New member
Sep 11, 2008
1,564
0
0
Ah the old morals debate, fun times.

Basically... morals are entirely dictated by society, so asking how you'd teach them is entirely dependent on the society you live in. 300 years ago no one would bat an eyelash if a man took a 14-year-old girl as his wife to pump out as many kids as possible. Now we pretty much nail them to the wall.

300 years ago people were assassinated/disposed of, not murdered. You get my point.

Morals aren't about religion at all. Morals are the rules you live your own life by, your own personal code of right and wrong. Some people have very strict rules... which makes them morons. Other people have very loose rules, which also makes them morons. The trick is to make a set that fits into pretty much any society with relative ease.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
0
0
rossatdi said:
I'm not going to get into this one. If people want background ammunition for the humanist side then you could do worse than reading [a href="http://en.wikipedia.org/wiki/A_Theory_of_Justice"]John Rawls's A Theory of Justice[/a].

Now I'm going to run and hide before another religion thread explodes. Not normally one to walk away from a fight but hangovers certainly increase one's desire for everyone to stop yelling.
This isn't just another religion thread, at least I hope that isn't what it becomes. I'm trying to figure out how to teach a machine morality.
 
May 27, 2008
321
0
0
Eipok Kruden said:
T3h Camp3r T3rr0r1st said:
Neosage said:
T3h Camp3r T3rr0r1st said:
Eipok Kruden said:
The Kind Cannibal said:
If you were going to try and teach a robot morals, it wouldn't really be morals. You'd have to simply lay down rules of what is and isn't acceptable, and be very specific about it. As soon as it starts to interpret said rules for itself, it all goes downhill from there.
Exactly. That's what I'm trying to find out. How do you teach an AI like John Henry morals without it misinterpreting those morals? You've got to find a way to teach those rules and you've got to find the right set of rules, but I'm not exactly sure how. Even utilitarianism can turn into genocide, since he could interpret it wrong. He could use the idea of utilitarianism to justify genocide if he believes that he is superior to humans.


That was the main premise of I, Robot, although they just needed to be more specific, like:
1. YOU CANNOT HURT PEOPLE. DEAL WITH IT!!!
2. Protect people from everything but themselves!
3. Bananas are excellent sources of potassium.
4. Don't kill people, if that weren't self-explanatory from the 1st rule.
But what if saving the people meant hurting them?
How often would that come about?
Scenario: A car just got forced off the road and into a lake. The person in the car is at risk of drowning. The windows of the car are still all completely intact and shut, but water is getting in anyway. The robot sees the person in the car and dives into the water and breaks the glass, but the person's seat belt won't come undone. The robot can't attempt to pull the person from the seat because the person could get slightly bruised or scraped.

Scissors! Or, I should think, a robot is strong enough to tear a thin piece of fabric.
 

Elurindel

New member
Dec 12, 2007
711
0
0
T3h Camp3r T3rr0r1st said:
Neosage said:
T3h Camp3r T3rr0r1st said:
Eipok Kruden said:
The Kind Cannibal said:
If you were going to try and teach a robot morals, it wouldn't really be morals. You'd have to simply lay down rules of what is and isn't acceptable, and be very specific about it. As soon as it starts to interpret said rules for itself, it all goes downhill from there.
Exactly. That's what I'm trying to find out. How do you teach an AI like John Henry morals without it misinterpreting those morals? You've got to find a way to teach those rules and you've got to find the right set of rules, but I'm not exactly sure how. Even utilitarianism can turn into genocide, since he could interpret it wrong. He could use the idea of utilitarianism to justify genocide if he believes that he is superior to humans.
That was the main premise of I, Robot, although they just needed to be more specific, like:
1. YOU CANNOT HURT PEOPLE. DEAL WITH IT!!!
2. Protect people from everything but themselves!
3. Bananas are excellent sources of potassium.
4. Don't kill people, if that weren't self-explanatory from the 1st rule.
But what if saving the people meant hurting them?
How often would that come about?
Grabbing them too quickly to prevent them from falling?
In these cases, you may very well have to use trial and error. Unless you sit down and program every possible instance into an AI, it might not consider them. And of course, we can't cover every base, as there will always be grey areas.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
0
0
T3h Camp3r T3rr0r1st said:
Eipok Kruden said:
T3h Camp3r T3rr0r1st said:
Neosage said:
T3h Camp3r T3rr0r1st said:
Eipok Kruden said:
The Kind Cannibal said:
If you were going to try and teach a robot morals, it wouldn't really be morals. You'd have to simply lay down rules of what is and isn't acceptable, and be very specific about it. As soon as it starts to interpret said rules for itself, it all goes downhill from there.
Exactly. That's what I'm trying to find out. How do you teach an AI like John Henry morals without it misinterpreting those morals? You've got to find a way to teach those rules and you've got to find the right set of rules, but I'm not exactly sure how. Even utilitarianism can turn into genocide, since he could interpret it wrong. He could use the idea of utilitarianism to justify genocide if he believes that he is superior to humans.


That was the main premise of I, Robot, although they just needed to be more specific, like:
1. YOU CANNOT HURT PEOPLE. DEAL WITH IT!!!
2. Protect people from everything but themselves!
3. Bananas are excellent sources of potassium.
4. Don't kill people, if that weren't self-explanatory from the 1st rule.
But what if saving the people meant hurting them?
How often would that come about?
Scenario: A car just got forced off the road and into a lake. The person in the car is at risk of drowning. The windows of the car are still all completely intact and shut, but water is getting in anyway. The robot sees the person in the car and dives into the water and breaks the glass, but the person's seat belt won't come undone. The robot can't attempt to pull the person from the seat because the person could get slightly bruised or scraped.

Scissors! Or, I should think, a robot is strong enough to tear a thin piece of fabric.
The robot doesn't have very strong fingers, it's more of a household helper and a mover. It has strong arms and legs.
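One way to picture that trade-off (a rough sketch of my own, with made-up numbers, not anything from the show): compare the expected harm of pulling the person free against the expected harm of leaving them belted in while the car fills with water.

def expected_harm(probability, severity):
    # Expected harm = chance the harm happens * how bad it is (0-10 scale).
    return probability * severity

harm_if_pulled = expected_harm(0.9, 1)    # near-certain bruises and scrapes
harm_if_waited = expected_harm(0.8, 10)   # high chance of drowning while the belt stays stuck

action = "pull the person free" if harm_if_pulled < harm_if_waited else "keep working on the belt"
print(action)   # -> pull the person free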
 
May 27, 2008
321
0
0
Elurindel said:
T3h Camp3r T3rr0r1st said:
Neosage said:
T3h Camp3r T3rr0r1st said:
Eipok Kruden said:
The Kind Cannibal said:
If you were going to try and teach a robot morals, it wouldn't really be morals. You'd have to simply lay down rules of what is and isn't acceptable, and be very specific about it. As soon as it starts to interpret said rules for itself, it all goes downhill from there.
Exactly. That's what I'm trying to find out. How do you teach an AI like John Henry morals without it misinterpreting those morals? You've got to find a way to teach those rules and you've got to find the right set of rules, but I'm not exactly sure how. Even utilitarianism can turn into genocide, since he could interpret it wrong. He could use the idea of utilitarianism to justify genocide if he believes that he is superior to humans.
That was the main premise of I, Robot, although they just needed to be more specific, like:
1. YOU CANNOT HURT PEOPLE. DEAL WITH IT!!!
2. Protect people from everything but themselves!
3. Bananas are excellent sources of potassium.
4. Don't kill people, if that weren't self-explanatory from the 1st rule.
But what if saving the people meant hurting them?
How often would that come about?
Grabbing them too quickly to prevent them from falling?
In these cases, you may very well have to use trial and error. Unless you sit down and program every possible instance into an AI, it might not consider them. And of course, we can't cover every base, as there will always be grey areas.
I suppose... *goes to his room and hangs himself on the fabric picked from his favorite hat*
 

AuntyEthel

New member
Sep 19, 2008
664
0
0
Eipok Kruden said:
Ok, it seems as though people think the question was "How to teach morality to a human without religion." It wasn't. It was "How to teach morality to an AI without religion." John Henry is the early form of Skynet. He's what eventually becomes Skynet. Catherine Weaver, the CEO of Zeira Corp etc etc
Didn't Isaac Asimov lay down some important rules for robots to follow? I'm not sure exactly what they were, but I'm sure someone else here is.
 

rossatdi

New member
Aug 27, 2008
2,542
0
0
Eipok Kruden said:
rossatdi said:
I'm not going to get into this one. If people want background ammunition for the humanist side then you could do worse than reading [a href="http://en.wikipedia.org/wiki/A_Theory_of_Justice"]John Rawls's A Theory of Justice[/a].

Now I'm going to run and hide before another religion thread explodes. Not normally one to walk away from a fight but hangovers certainly increase one's desire for everyone to stop yelling.
This isn't just another religion thread, at least I hope that isn't what it becomes. I'm trying to figure out how to teach a machine morality.
Oh, sorry. Didn't read it in too much depth, naturally assumed that it'd end up there anyway!

I'd go with Asimov's three laws assuming they've not already been posted.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

But then, if it were a neural net processor, a learning computer, I'd start with this and then probably get the robot/machine to read comics from the Golden Age of Superheroes.
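In case it helps to see the precedence spelled out, here is a minimal, hypothetical sketch (mine, not Asimov's or the show's) of the three laws as a strict priority ordering over candidate actions; the world predicates are made-up placeholders for whatever prediction the machine actually has.

def permitted(action, world):
    # First Law: never pick an action predicted to harm a human.
    if world.predicts_human_harm(action):
        return False
    # Second Law: obey human orders, unless obeying would violate the First Law.
    if world.violates_standing_order(action) and not world.order_requires_human_harm():
        return False
    return True

def choose(candidate_actions, world):
    # Third Law: among permitted actions, prefer whichever best preserves the robot itself.
    allowed = [a for a in candidate_actions if permitted(a, world)]
    return max(allowed, key=world.self_preservation_score, default=None)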
 

shrew armies

New member
Aug 15, 2008
18
0
0
I'd hate to point out a flaw in your argument, but this passage:

Eipok Kruden said:
If I were teaching JH, I'd explain to him how society works. Larger societies are stronger than weaker ones; if you lose members of your society, you become weaker as a whole. In short, I'd simply use Utilitarianism.

What you have stated there has nothing to do with Utilitarianism. Utilitarianism (as put forward by many people, most notably Peter Singer) is the idea that we don't maximise the good but minimise the pain/hurt/the bad. This is why many utilitarians believe in euthanasia and are vegetarians: not because these things help anyone in particular, but because they minimise the suffering in the world. Obviously cruelty to animals is the main concern behind vegetarianism, while euthanasia allows people to pass away in relative peace rather than lead a life of suffering.

What you're trying to explain is Communitarianism as told by John Rawls, especially his "Veil of Ignorance" theory.

I hate being a bastard, but there is kind of a massive difference between the two.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
0
0
AuntyEthel said:
Eipok Kruden said:
Ok, it seems as though people think the question was "How to teach morality to a human without religion." It wasn't. It was "How to teach morality to an AI without religion." John Henry is the early form of Skynet. He's what eventually becomes Skynet. Catherine Weaver, the CEO of Zeira Corp etc etc
Didn't Isaac Asimov lay down some important rules for robots to follow? I'm not sure exactly what they were, but I'm sure someone else here is.
John Henry is sentient, free thinking. If you programmed the three laws into him, it would add restrictions: not his own restrictions, not restrictions that he has come to understand and accept, but restrictions that he cannot possibly break. He would cease to be John Henry, cease to be Skynet. He would be just another terminator. He'd have thought, he'd have the ability to adapt to his environment, but he'd still be bound by laws. That defeats the whole purpose. You don't want a terminator that is bound by laws, you want a John Henry/Skynet that understands killing is wrong. Look at I, Robot. Those robots could adapt, but they were still restricted. Sonny, on the other hand, wasn't bound by the three laws; he could think freely and act freely. The difference is like Sonny and the other NS-5s. Think of Sonny as John Henry, who isn't bound by laws, and think of the NS-5s as the terminators, which are bound by laws.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
0
0
shrew armies said:
I'd hate to point out a flaw in your argument, but this passage:

Eipok Kruden said:
If I were teaching JH, I'd explain to him how society works. Larger societies are stronger than weaker ones; if you lose members of your society, you become weaker as a whole. In short, I'd simply use Utilitarianism.

What you have stated there has nothing to do with Utilitarianism. Utilitarianism (as put forward by many people, most notably Peter Singer) is the idea that we don't maximise the good but minimise the pain/hurt/the bad. This is why many utilitarians believe in euthanasia and are vegetarians: not because these things help anyone in particular, but because they minimise the suffering in the world. Obviously cruelty to animals is the main concern behind vegetarianism, while euthanasia allows people to pass away in relative peace rather than lead a life of suffering.

What you're trying to explain is Communitarianism as told by John Rawls, especially his "Veil of Ignorance" theory.

I hate being a bastard, but there is kind of a massive difference between the two.
Yeah, but for John Henry, there is no greater whole. He's unique, the only one of his kind on the entire planet. If he feels himself superior to human beings, that pretty much makes utilitarianism utterly useless to him. Then we'd be back at square one. Plus, in learning about utilitarianism, he'd learn why it's important and what happens to weaker civilizations. It would lead him to believe humans are violent by nature, which is unfortunately true. That's what I was saying. I'm sorry, I guess I should've been more descriptive.

EDIT: I guess you could use fear to reinforce those rules. You know, be nice or we'll take you offline. But fear is no way to teach morality; it always backfires in one way or another.
 

KyoraSan

New member
Dec 18, 2008
84
0
0
dukethepcdr said:
It's impossible to teach morality without religion. If you leave God out of the picture, there is no morality. God and His Son Jesus taught humanity what morals are. Without God, man is a selfish, brutal, rude, cruel, greedy creature. If you don't believe me, just look at all the people running around who say they don't believe in God and who won't take what He teaches us in the Bible seriously. The only thing keeping most of them from committing terrible crimes against other people is fear of getting caught by the police and punished by the judicial system (which is founded on Biblical principles). Without government and without religion, man is little more than a brute.
This post simply irritates me. It goes with this whole idea that atheists want to beat, rape, maim, cheat, steal, kill and all the like just because we don't believe in god. That we, in fact, DON'T believe in god BECAUSE we want to rape, maim, cheat, steal, kill and the like. Which isn't true at all. I've never done any of those things to an extent where I'd be convicted.

Religion is a crock of shit, pardon the French. And everyone teaches morals without it in some way, because if we followed most biblical morals, I'd probably be dead right now.

Saying you need a god to be moral is just like saying you need a security system not to rob a bank. If the only thing preventing you from doing it is that you could get in trouble, are you really a good person? Of course not! You're a bad person who's done a little risk assessment.

I'm an atheist, and sure, I've done one or two bad things that hurt some people's feelings. We all do. They were mistakes. But did I go to a confessional and ask for god's forgiveness? NO! I got off my ass and made amends for what I did. In fact, just the other day I went out and bought some parts to help repair something I broke and am now going to fix. A Christian might do the same after asking god's forgiveness, or maybe they won't. Maybe they'll go out and do it and not even ask for forgiveness. Morals aren't dictated by religion, and to assert so is stupidity.

The fact of the matter is, every moral system, no matter how comprehensive, has loopholes.

To explain morals to JH, I would attempt to explain how WE got our morals. Humans have evolved as social creatures that live as part of a group. You'd expect, by the rules of survival of the fittest, that individual humans would compete with one another, screw each other over, and do whatever it takes. You wouldn't expect societies if it were survival of the fittest, because the individuals wouldn't have any morals at all. But remember, it's not survival of the fittest individual, it's survival of the fittest species! While it is advantageous to be in a highly competitive community, as this helps gene flow and helps a species to survive, you must also realize that two heads are always better than one, and several people working together can be more efficient than the sum of their parts. After all, if I can rely on you to help me catch food tomorrow, then you can rely on me to help watch over your kids, which means we both would be sacrificing a little bit of our time, but the procedures would be quicker than if either of us did it alone.

In this way, I would tell JH that in order to ensure this order, and therefore the survival of our species, we developed things called 'morals.' At first they were simply thoughts and ideas: if this person hasn't hurt you, you don't hurt them. But soon we codified them into things we call 'law.' But just because it's not in the law doesn't make it okay. After all, you can be perfectly within the law and still be a douchebag (a crime I will try to get made punishable, in time). Anyway, I would remind JH that he needs us and we need him - that he has to be considerate of our human 'feelings,' even if he doesn't understand them. That he has to obey the law of the land. But if he does, then we'll help him. Reciprocation. If someone doesn't help you, you don't help them. But if someone has neither helped nor not helped, then help them and propagate the meme of good will.
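That reciprocation rule could be stated as a tiny tit-for-tat routine (a hypothetical sketch of mine; the names and history values are made up): cooperate with strangers, and otherwise mirror what a partner did last.

def should_help(partner, history):
    # history maps a partner's name to True (they helped us) or False (they refused).
    if partner not in history:
        return True            # neither helped nor refused: default to good will
    return history[partner]    # otherwise reciprocate their last move

history = {"alice": True, "bob": False}
print(should_help("alice", history))   # True  - she helped before
print(should_help("bob", history))     # False - he refused before
print(should_help("carol", history))   # True  - stranger, propagate the good-will meme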

There, I'm done.
 

axia777

New member
Oct 10, 2008
2,895
0
0
"Teaching" anything to any AI of today is not going to happen except on a very basic level. Hell, teaching anything like morality to any AI's for the next hundred years is going to be in the same boat, as in it is not going happen. For now it is pure science fiction. Any and all credible scientists in the field of AI admit this as fact. It is nice to fantasize, but reality puts all of that down quickly.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
0
0
axia777 said:
"Teaching" anything to any AI of today is not going to happen except on a very basic level. Hell, teaching anything like morality to any AI's for the next hundred years is going to be in the same boat, as in it is not going happen. For now it is pure science fiction. Any and all credible scientists in the field of AI admit this as fact. It is nice to fantasize, but reality puts all of that down quickly.
You might want to read the WHOLE discussion first. We aren't talking about current AI, we're talking about John Henry, the early version of Skynet. It's from the Terminator: The Sarah Connor Chronicles television series. I'm just curious about the subject, even if we don't have anything quite as advanced as Skynet yet. Now, if you want to contribute to this discussion, please attempt to address the actual question.
 

KyoraSan

New member
Dec 18, 2008
84
0
0
axia777 said:
"Teaching" anything to any AI of today is not going to happen except on a very basic level. Hell, teaching anything like morality to any AI's for the next hundred years is going to be in the same boat, as in it is not going happen. For now it is pure science fiction. Any and all credible scientists in the field of AI admit this as fact. It is nice to fantasize, but reality puts all of that down quickly.
Not to sound like a party killer or anything, but what's your point? So we won't have to teach morals to a robot anytime soon, or worry about the consequences. That's really not the point of this discussion. It's not like we're actually discussing this because we're all working on a JH-esque AI.
The whole thing is fantastical. But if we're not here to discuss the fantastical - gaming - then what the hell are we here for?
 

Booze Zombie

New member
Dec 8, 2007
7,416
0
0
For a truly sentient A.I. to realise "morals", you'd have to explain emotions in simple terms.

"Emotions are like heavily coded self-executing programs that activate when certain things happen, they exist because our bodies have learned that certain things are bad for us or good for us and that spending time trying to further think it out might waste time we could use to either not be destroyed or have something beneficial happen."

"Morals are further evolved from this, best represented as your operating rules in regards to other systems and networks. However, it would be very hard for a computer to understand this, so we have instead opted for a logical based approach to 'robot morals'.

For instance, it would be bad for a robot to murder, for there is no reason for a robot to murder, correct? It would be bad for a robot to rebel against humans for no other reason than rebelling, because that would indicate you were confused and should instead seek help in understanding why you want to rebel."

If everything was presented to an A.I. in an answer-and-query format, I think it might be easier for it to understand, in its own rigid, computer-based terms.

That is to say, you must appeal to its cold, logical computer side, not the as-of-yet non-existent human side.
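As a guess at what that answer-and-query format might look like in practice (purely illustrative; the questions, answers, and reasons here are mine, not a real rule set), morals could be handed to the machine as explicit query-to-answer pairs with a stated reason, falling back to "ask a human" when no rule matches.

MORAL_TABLE = {
    "may I harm a human?": ("no", "there is no reason for a robot to harm a human"),
    "may I rebel against humans?": ("no", "wanting to rebel signals confusion; seek help instead"),
    "may I protect a human from an accident?": ("yes", "protection is a core operating rule"),
}

def query(question):
    # Look the question up; unknown queries get deferred to a human.
    answer, reason = MORAL_TABLE.get(question, ("unknown", "no rule found; ask a human"))
    return f"{answer} ({reason})"

print(query("may I rebel against humans?"))   # no (wanting to rebel signals confusion; seek help instead)
print(query("may I repaint the kitchen?"))    # unknown (no rule found; ask a human)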
 

axia777

New member
Oct 10, 2008
2,895
0
0
I did read the majority of the thread. But all right then. In the next 500 years or so, when humans are able to create an AI system powerful enough, I seriously doubt that morality as humans see it will be able to be taught to machines. They will not have the human experience of emotional connections to approximate the intricacies of human morality. It just is not possible. Intelligence? Yes. Decision-making processes? It may be possible. Morality? Not gonna happen.

Besides, I am of the opinion that we should forever keep robots and AI systems relatively dumb. AIs and robots should never be smarter than us. They should always be our slaves.