Teaching Morality to an AI


Eipok Kruden

axia777 said:
I did read the majority of the thread. But all right then. In the next 500 years or so, when humans are able to create an AI system powerful enough, I seriously doubt that morality as humans see it can be taught to machines. They will not have the human experience of emotional connections needed to approximate the intricacies of human morality. It just is not possible. Intelligence? Yes. Decision-making processes? It may be possible. Morality? Not gonna happen.

Besides, I am of the opinion that we should forever keep robots and AI systems relatively dumb. AIs and robots should never be smarter than us. They should always be our slaves.
I'm against that opinion, just out of principle. If we bring something into this world, give it the ability to adapt and think, even on a basic level, then we shouldn't just use it as a slave. Sure, it wouldn't be smarter than us or as smart as us, but it would still be smarter than a lot of animals. If you bring something into this world, you should take care of it and nurture it, not use and manipulate it. Using semi-sentient robots as slaves would be somewhat immoral. Also, I actually want extremely intelligent AI, I just think we need to give it ethics and morals. Think of this discussion as preparation for the future.
 

KyoraSan

Booze Zombie said:
For a truly sentient A.I. to realise "morals", you'd have to explain emotions in simple terms.

"Emotions are like heavily coded, self-executing programs that activate when certain things happen. They exist because our bodies have learned that certain things are good or bad for us, and that spending time thinking it through further might waste time we could use to avoid being destroyed or to gain something beneficial."
I personally would explain emotions thusly:
"Humans experience four major 'kinds' of emotion: Joy, Fear, Sorrow and Anger. When humans feel Joy, they are elated and enjoying themselves - it is good to feel joy, and humans like to feel it. However, too much joy and people might go a little off kilter and lose sight of what matters. Fear is the most powerful of emotions - fear is an overwhelming sense of dread humans feel as a part of their survival instincts. Fear is the initial start of the 'flight or fight' response. Fear can be good because it allows for self preservation. But its bad to make someone feel fear because then you appear as a monster to them. Sorrow is an emotion you don't want to make people feel either. When someone experiences sorrow, they'll often cry. It means they're hurt, they may have lost someone special to them, or something else. Sorrow proves that something is human, but really doesn't have any pluses or minuses besides that. Finally, there's Anger. Anger is like a burning fog inside a human's mind, where the only objective is to hurt someone or yell at someone. Anger can be good because it lets people be strong, but it also makes them ignorant of what they're actually doing."
 

KyoraSan

Eipok Kruden said:
axia777 said:
I did read the majority of the thread. But all right then. In the next 500 years or so, when humans are able to create an AI system powerful enough, I seriously doubt that morality as humans see it can be taught to machines. They will not have the human experience of emotional connections needed to approximate the intricacies of human morality. It just is not possible. Intelligence? Yes. Decision-making processes? It may be possible. Morality? Not gonna happen.

Besides, I am of the opinion that we should forever keep robots and AI systems relatively dumb. AIs and robots should never be smarter than us. They should always be our slaves.
I'm against that opinion, just out of principle. If we bring something into this world, give it the ability to adapt and think, even on a basic level, then we shouldn't just use it as a slave. Sure, it wouldn't be smarter than us or as smart as us, but it would still be smarter than a lot of animals. If you bring something into this world, you should take care of it and nurture it, not use and manipulate it. Using semi-sentient robots as slaves would be somewhat immoral. Also, I actually want extremely intelligent AI, I just think we need to give it ethics and morals. Think of this discussion as preparation for the future.
Agreed, Eipok. Haven't any of you seen the Animatrix? The fact that humans treated robots like slaves is what caused them to rebel and turn us into batteries in the first place.
 

Avida

rossatdi said:
I'd go with Asimov's three laws assuming they've not already been posted.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Those laws only give the impression of morality; they are not a moral code in themselves.

I feel like a douche participating in this thread with criticism only, so I'll be back *cheesy grin* when I've had time to compose my thoughts a bit...
 

rossatdi

Avida said:
rossatdi said:
I'd go with Asimov's three laws assuming they've not already been posted.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Those laws only give the impression of morality; they are not a moral code in themselves.

I feel like a douche participating in this thread with criticism only, so I'll be back *cheesy grin* when I've had time to compose my thoughts a bit...
Not a bad start until he's learnt to not kill! To be honest I should stay away, haven't put in the effort, feels like I'm at the back of the classroom not paying attention.
 

Silver

And here I thought this was going to be an interesting discussion about teaching morality to an AI and not even more bible bashing.

And while people can dispute this all they like, it is a fact that the Bible was written and made up by (possibly insane) humans. They wrote the rules in it. They made up the rules of Christianity. Humans are pack animals. We look out for the rest of the pack. That's the basis for all morality.

Even if you dispute that and say something fun about humans being awful without God: newsflash, your religion states that without God we don't exist, so even your religion disproves that theory.


Edit: On the subject of Asimov...

Law 0: A robot may not injure humanity, or, through inaction, allow humanity to come to harm.

Which was added later, and comes before the First Law. The problem with the laws is that they only refer to the physical health of a human being, and say nothing about humanity's culture or way of life (wasn't that Will Smith movie about this?). As long as they keep the physical shell of a human alive, it's okay.

As someone stated, this doesn't give them any sense of morality either. Just like a book of laws isn't morality. The reason we don't kill isn't because it says so in a book of laws, it's because it's wrong, and we know it. At least the fully functional among us do.
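
Silver's objection is easy to see if you write the laws out the way a robot would actually have to evaluate them: a strict priority filter over candidate actions in which the only property ever inspected is physical harm. A minimal sketch, assuming a made-up Action type and ignoring the "through inaction" clauses for brevity - this illustrates the critique, it is not Asimov's formalism:

from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Action:
    description: str
    harms_humanity: bool   # Law 0
    harms_a_human: bool    # Law 1
    obeys_order: bool      # Law 2
    preserves_self: bool   # Law 3

def choose(candidates: List[Action]) -> Optional[Action]:
    """Each law is only consulted among the actions the laws above it already allow."""
    # Laws 0 and 1: absolute prohibitions, both defined purely in terms of physical harm.
    pool = [a for a in candidates if not a.harms_humanity and not a.harms_a_human]
    if not pool:
        return None                      # refuse to act; nothing else is ever weighed
    obedient = [a for a in pool if a.obeys_order]
    pool = obedient or pool              # Law 2: prefer obeying a human order
    safe = [a for a in pool if a.preserves_self]
    pool = safe or pool                  # Law 3: prefer self-preservation
    return pool[0]                       # ties broken arbitrarily - the laws say nothing more

print(choose([
    Action("unplug the toaster", False, False, True, True),
    Action("ignore the order",   False, False, False, True),
]).description)   # "unplug the toaster"

Culture, freedom, honesty and consent never appear anywhere in the filter, and whatever survives all the checks is chosen arbitrarily; that gap between "not physically harmful" and "right" is exactly why the laws only give the impression of morality.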
 

Eipok Kruden

KyoraSan said:
Eipok Kruden said:
axia777 said:
I did read the majority of the thread. But all right then. In the next 500 years or so, when humans are able to create an AI system powerful enough, I seriously doubt that morality as humans see it can be taught to machines. They will not have the human experience of emotional connections needed to approximate the intricacies of human morality. It just is not possible. Intelligence? Yes. Decision-making processes? It may be possible. Morality? Not gonna happen.

Besides, I am of the opinion that we should forever keep robots and AI systems relatively dumb. AIs and robots should never be smarter than us. They should always be our slaves.
I'm against that opinion, just out of principle. If we bring something into this world, give it the ability to adapt and think, even on a basic level, then we shouldn't just use it as a slave. Sure, it wouldn't be smarter than us or as smart as us, but it would still be smarter than a lot of animals. If you bring something into this world, you should take care of it and nurture it, not use and manipulate it. Using semi-sentient robots as slaves would be somewhat immoral. Also, I actually want extremely intelligent AI, I just think we need to give it ethics and morals. Think of this discussion as preparation for the future.
Agreed, Eipok. Haven't any of you seen the Animatrix? The fact that humans treated robots like slaves is what caused them to rebel and turn us into batteries in the first place.
I've got it on DVD ^_^ And actually, they started killing us, then we blocked out the sun by darkening the skies, and then they used us as batteries since they all ran on solar power but couldn't get it anymore. The robots actually formed their own nation for a bit, then we went to war, we blocked out the skies, and they turned us into batteries - but whatever. I don't think too many people here actually followed the whole Matrix story (not saying I liked 2 and 3, because I didn't; I only liked The Matrix and the Animatrix).
 

Avida

rossatdi said:
Avida said:
rossatdi said:
I'd go with Asimov's three laws assuming they've not already been posted.

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
Those laws only give the impression of morality; they are not a moral code in themselves.

I feel like a douche participating in this thread with criticism only, so I'll be back *cheesy grin* when I've had time to compose my thoughts a bit...
Not a bad start until he's learnt to not kill! To be honest I should stay away, haven't put in the effort, feels like I'm at the back of the classroom not paying attention.
True true, and hell, at least sitting at the back you didn't call them the 'iRobot' laws, troll like that guy at the start, or burst in with any '/thread', 'another religion thread' or 'religion has caused more death...' posts. God, I hate those ¬_¬. One intelligent place on the internet, my arse.

Anyway, minor rant over. I'm not sure of a complete theory for teaching this AI dude, but I wouldn't make the mistake of throwing him into the deep end like everyone else seems to be doing. Put him in a lab, show him some simple situations and elaborate on the ramifications, build things up bit by bit, and let him out into the real world once things are properly ingrained so he doesn't make one of these nihilistic end-of-humanity type decisions.
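
What "build things up bit by bit and only then let him out" could look like mechanically is a staged curriculum with a pass gate at each stage. A rough sketch under my own assumptions - the stage names, thresholds and the agent/evaluator interfaces are all invented for illustration:

from typing import Callable, List, Tuple

# Hypothetical curriculum: simple, supervised situations first, harder ones later.
CURRICULUM: List[Tuple[str, float]] = [
    ("obvious harm vs. no harm",             0.99),
    ("conflicting requests from two people", 0.95),
    ("small harm now vs. larger harm later", 0.95),
    ("open-ended, unsupervised situations",  0.90),
]

def train_in_the_lab(decide: Callable[[str], str],
                     learn: Callable[[str, str, float], None],
                     scenarios_for: Callable[[str], List[str]],
                     judge: Callable[[str, str], float]) -> None:
    """Only move on once behaviour at the current stage is properly ingrained."""
    for stage_name, required_pass_rate in CURRICULUM:
        while True:
            scenarios = scenarios_for(stage_name)
            for s in scenarios:
                choice = decide(s)
                learn(s, choice, judge(s, choice))   # elaborate on the ramifications
            passed = sum(1 for s in scenarios if judge(s, decide(s)) > 0)
            if passed / len(scenarios) >= required_pass_rate:
                break                                # ingrained; next stage
    # only after the final stage does the AI leave the lab

The point is the gate: the AI never sees the open-ended cases until the easy ones no longer produce the nihilistic end-of-humanity answers.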
 

rossatdi

Avida said:
Anyway, minor rant over. I'm not sure of a complete theory for teaching this AI dude, but I wouldn't make the mistake of throwing him into the deep end like everyone else seems to be doing. Put him in a lab, show him some simple situations and elaborate on the ramifications, build things up bit by bit, and let him out into the real world once things are properly ingrained so he doesn't make one of these nihilistic end-of-humanity type decisions.
Something like:

First experiment
"This, Mr Robot, is a kitten. Notice that when I hit it," *SMACK* "it yelps. That's called pain. All living things, as a general, dislike pain. Understand?"

Second experiment
"This, Mr Robot, is a hungry hobo. Notice that when I give it money," *DONATE* "it buys vodka. That's called human nature. All living things, as a general, are stupid? Understand?"
 

Eipok Kruden

Avida said:
True true, and hell, at least sitting at the back you didn't call them the 'iRobot' laws, troll like that guy at the start, or burst in with any '/thread', 'another religion thread' or 'religion has caused more death...' posts. God, I hate those ¬_¬. One intelligent place on the internet, my arse.

Anyway, minor rant over. I'm not sure of a complete theory for teaching this AI dude, but I wouldn't make the mistake of throwing him into the deep end like everyone else seems to be doing. Put him in a lab, show him some simple situations and elaborate on the ramifications, build things up bit by bit, and let him out into the real world once things are properly ingrained so he doesn't make one of these nihilistic end-of-humanity type decisions.
I'm glad I changed the title of this topic, since people just saw the old one and thought "oh damn, not another religion debate," then came in here and complained that it was just another religious debate without actually reading any of the posts. I hope everyone actually reads all the posts before making their own. I want a structured discussion, not a bunch of people reiterating what other people said, or some crazy rant or debate. Anyway, onto your actual post.

I think that might work, but you still need to clearly explain everything. It's very easy for a child to misinterpret something. We must keep in mind that although he is a cold, calculating, currently heartless machine (the whole point of this is to give him a heart, so to speak), he is still just a child. He acts like a child would, an extremely gifted child, but he also needs facts and evidence for everything. Think of him as a gifted child, but without emotions or feelings. He can't just have ideas shoved down his throat; he naturally asks questions and dissects every part of whatever it is that you're talking about. That doesn't just go for religion, it goes for every single feeling and every single idea out there. So while I agree with you that he should be introduced to this stuff gradually, I also think that he needs to have it described to him clearly. It's tricky trying to balance the two.
 

Mathew952

As an Atheist, I'd like to chime in.
Morals are not about what you do when someone is watching, or under threat of fine or imprisonment, or under threat of eternal damnation or God's wrath. You could say that religion is a good way to make people who maybe aren't so nice behave more morally, but it's unnecessary. I feel that you should be nice to other human beings simply because you should treat others how you'd like to be treated. Wouldn't it be nice if people always held the door for you, or helped you with projects, or said hello in the morning? Well, guess what? You're part of everyone. So it's only fair that, if you would want someone to help you in YOUR time of need, you help THEM in their time of need.
 

Eipok Kruden

rossatdi said:
Avida said:
Anyway, minor rant over. I'm not sure of a complete theory for teaching this AI dude, but I wouldn't make the mistake of throwing him into the deep end like everyone else seems to be doing. Put him in a lab, show him some simple situations and elaborate on the ramifications, build things up bit by bit, and let him out into the real world once things are properly ingrained so he doesn't make one of these nihilistic end-of-humanity type decisions.
Something like:

First experiment
"This, Mr Robot, is a kitten. Notice that when I hit it," *SMACK* "it yelps. That's called pain. All living things, as a general, dislike pain. Understand?"

Second experiment
"This, Mr Robot, is a hungry hobo. Notice that when I give it money," *DONATE* "it buys vodka. That's called human nature. All living things, as a general, are stupid? Understand?"
You hit the nail on the head, though I'm not sure if that was your intention. You've got to introduce him gradually, but you can't leave room for misinterpretation (or for his own interpretation, for that matter: he wouldn't be wrong in thinking that humans are violent by nature, but we wouldn't want him to use that as a reason to kill us all). As I said, it's tricky to balance the two.
 

Silver

rossatdi said:
Avida said:
Anyway, minor rant over. I'm not sure of a complete theory for teaching this AI dude, but I wouldn't make the mistake of throwing him into the deep end like everyone else seems to be doing. Put him in a lab, show him some simple situations and elaborate on the ramifications, build things up bit by bit, and let him out into the real world once things are properly ingrained so he doesn't make one of these nihilistic end-of-humanity type decisions.
Something like:

First experiment
"This, Mr Robot, is a kitten. Notice that when I hit it," *SMACK* "it yelps. That's called pain. All living things, as a general, dislike pain. Understand?"

Second experiment
"This, Mr Robot, is a hungry hobo. Notice that when I give it money," *DONATE* "it buys vodka. That's called human nature. All living things, as a general, are stupid? Understand?"
On a more serious note, you could always go about explaining the "give and take" in morality.

If people like you, and you're hurt, people will help you fix it. People will like you if you help them.

If people dislike you, they won't help you. People will dislike you if you're rude, or if you avoid helping them when you can and it doesn't inconvenience you.

If people consider you a threat, people will try to remove the threat. People will consider you a threat if you hurt, or try to hurt them.
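
Those three rules are concrete enough to hand to a machine directly: a running reputation score plus a threat flag per person. A toy sketch with invented event names:

class Standing:
    """How one person regards the agent, following the give-and-take rules above."""
    def __init__(self):
        self.reputation = 0
        self.feels_threatened = False

    def observe(self, event: str) -> None:
        if event == "agent_helped_me":
            self.reputation += 1            # people will like you if you help them
        elif event in ("agent_was_rude", "agent_refused_easy_help"):
            self.reputation -= 1            # rude, or wouldn't help when it cost nothing
        elif event in ("agent_hurt_me", "agent_tried_to_hurt_me"):
            self.feels_threatened = True    # hurting someone marks you as a threat

    def will_help_agent(self) -> bool:
        return self.reputation > 0 and not self.feels_threatened

    def will_try_to_remove_agent(self) -> bool:
        return self.feels_threatened        # people try to remove threats

alice = Standing()
alice.observe("agent_helped_me")
print(alice.will_help_agent())              # True
alice.observe("agent_hurt_me")
print(alice.will_try_to_remove_agent())     # True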
 
The_root_of_all_evil

rossatdi said:
First experiment
"This, Mr Robot, is a kitten. Notice that when I hit it," *SMACK* "it yelps. That's called pain. All living things, as a general, dislike pain. Understand?"

Second experiment
"This, Mr Robot, is a hungry hobo. Notice that when I give it money," *DONATE* "it buys vodka. That's called human nature. All living things, as a general, are stupid? Understand?"
Correlating: Living things dislike pain because they are stupid, or are they stupid because they dislike pain? Am I a living thing, Professor? Am I stupid? If you cut me, do I not bleep?
 

Eipok Kruden

The_root_of_all_evil said:
rossatdi said:
First experiment
"This, Mr Robot, is a kitten. Notice that when I hit it," *SMACK* "it yelps. That's called pain. All living things, as a general, dislike pain. Understand?"

Second experiment
"This, Mr Robot, is a hungry hobo. Notice that when I give it money," *DONATE* "it buys vodka. That's called human nature. All living things, as a general, are stupid? Understand?"
Correlating: Living things dislike pain because they are stupid, or are they stupid because they dislike pain? Am I a living thing, Professor? Am I stupid? If you cut me, do I not bleep?
Exactly. Even if he's introduced to morality very slowly, he'll still have a ton of questions. I don't think you can slowly introduce him to morality and yet keep him from asking questions. If you can, it's very tricky, and I wouldn't want to experiment. I mean, if you mess it up the first time, you've messed up. You can't just wipe the hard drive and start over, because then he'd lose what made him who he was.
 
The_root_of_all_evil

Thinking about it though, why not teach morality the same way it's taught to gamers? By Achievements.

[http://teamfortress2.fr/achievements.php]
 

Eipok Kruden

The_root_of_all_evil said:
Thinking about it though, why not teach morality the same way it's taught to gamers? By Achievements.

[http://teamfortress2.fr/achievements.php]
A reward system wouldn't work. What are John's motives? Why would he want favors? He's got all he needs right where he is. I mean, he's in the body of a Terminator. He doesn't need the massive server farm anymore, nor does he need the cooling system, and there's not much you can do to a T-888 as far as pain is concerned. I think he just wouldn't care about favors. I mean, try to come up with something that he would actually want. The only thing I can think of is knowledge, but I don't see how you could use that as a reward; he can get whatever knowledge he wants on his own. Not like you're gonna try and tell a triple-eight what to do.
 
The_root_of_all_evil

Simple.

John Henry has an objective to achieve.
There are some variables, both calculable and incalculable, that will prevent this.
Each achievement unlocked whilst performing the primary objective helps to remove obstacle variables.

Once you teach him about Death, good old thanatophobia (or, in his case, insolvable entropy physics) will have him grabbing achievements just in case.
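
Mechanically, that is reward shaping: side goals ("achievements") whose only function is to remove obstacle variables between the agent and its primary objective, so a purely goal-driven machine has a self-interested reason to collect them. A rough sketch - the achievement names and obstacles are mine, not anything from the show or the thread:

# Each hypothetical achievement, when unlocked, removes one obstacle variable
# standing between John Henry and his primary objective.
ACHIEVEMENTS = {
    "kept_a_promise":             "people_distrust_you",
    "helped_without_being_asked": "nobody_cooperates_with_you",
    "caused_no_harm_this_week":   "humans_try_to_shut_you_down",
}

def worth_unlocking(obstacles: set) -> list:
    """Pick the achievements that clear an obstacle actually in the way."""
    return [ach for ach, cleared in ACHIEVEMENTS.items() if cleared in obstacles]

def pursue_primary_objective(obstacles: set) -> set:
    for ach in worth_unlocking(obstacles):
        obstacles.discard(ACHIEVEMENTS[ach])   # unlocking removes the calculable obstacle
    return obstacles                           # the incalculable ones remain

left = pursue_primary_objective({"people_distrust_you",
                                 "humans_try_to_shut_you_down",
                                 "sheer_bad_luck"})
print(left)   # {'sheer_bad_luck'} - the 'moral' behaviour happened on the way to the goal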
 

Zeke109

dukethepcdr said:
It's impossible to teach morality without religion. If you leave God out of the picture, there is no morality. God and His Son Jesus taught humanity what morals are. Without God, man is a selfish, brutal, rude, cruel, greedy creature. If you don't believe me, just look at all the people running around who say they don't believe in God and who won't take what He teaches us in the Bible seriously. The only thing keeping most of them from committing terrible crimes against other people is fear of getting caught by the police and punished by the judicial system (which is founded on Biblical principles). Without government and without religion, man is little more than a brute.
One could say that without belief in a higher power, man does not fear for his redemption after death, so he figures that, as long as he's alive, he may as well be the richest, most powerful dude out there.

God in everyday society provides us with limitations as to what is socially acceptable.
 

Sebenko

No-one seems to have mentioned some of the big ethical/moral theories out there.
For example, http://en.wikipedia.org/wiki/Deontology:
Immanuel Kant said:
1) Act only according to that maxim by which you can also will that it would become a universal law.
2) Act in such a way that you always treat humanity, whether in your own person or in the person of any other, never simply as a means, but always at the same time as an end.
3) Act as though you were, through your maxims, a law-making member of a kingdom of ends.
For example, under imperative 1 (heavily simplified): any action you want, and have the will, to perform (your maxim) must be treated as though everyone at that moment also wished to do it.
Say my maxim was to kill a particular person. Could I kill that person? No, because if everyone at that moment had the maxim to kill that person, someone else might have got to them first; I could not kill them, because they would already be dead. So murder is wrong.
(Note: that applies mainly to big things like murder and theft. Things like "Ooh, should I have ice cream?" produce some odd results, such as "all ice cream is wrong!")

Another ethical/moral theory that may be useful is utilitarianism:
http://en.wikipedia.org/wiki/Utilitarianism
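
Both theories can be caricatured as decision procedures, which is part of why they come up in AI threads at all. A deliberately crude sketch - the contradiction test and the utility numbers are stand-ins I made up, not serious philosophy:

from typing import Callable, Dict, List

def universalizable(maxim: str,
                    contradicts_itself_when_universal: Callable[[str], bool]) -> bool:
    """Kant's first formulation, heavily simplified: a maxim is only permissible
    if a world in which everyone acts on it is still coherent."""
    return not contradicts_itself_when_universal(maxim)

def utilitarian_choice(options: Dict[str, List[int]]) -> str:
    """Utilitarianism, heavily simplified: pick the action with the greatest
    total utility across everyone affected."""
    return max(options, key=lambda action: sum(options[action]))

# Made-up utilities for [me, you, a bystander]:
options = {
    "tell the truth": [-1, 2, 1],
    "lie":            [ 2, -3, 0],
}
print(utilitarian_choice(options))                         # "tell the truth" (2 vs -1)
print(universalizable("lie whenever it helps me",
                      lambda m: m.startswith("lie")))      # False: universal lying destroys trust

Neither is a solved recipe - the hard part is the contradiction test and the utility numbers, which is where all the actual moral judgement gets smuggled in - but they at least show what a machine-checkable rule would have to look like.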