Teaching Morality to an AI

The_root_of_all_evil

Feb 13, 2008
19,430
Eipok Kruden said:
I mean sure, a reward system would work in theory, but there's only one thing that he actually wants.
That's where thanatophobia comes in: if there is something he wants, there is always the possibility that something could stop him from getting it.

Even if it's only a 0.0000001% chance, an intelligent being would have to eliminate all such chances. And basic physics can teach him about entropy and heat death.

You've unwittingly answered the question yourself. What COULD be thrown at him? Solving that alone will make him gobble achievements like candy.

And if he solves that final question, the Ultimate Question of Life, the Universe and Everything, then why should he do anything? The ultimate question is unsolvable for that very reason: it's what keeps people alive.

How does he KNOW he's self-sufficient? If he is told it, why should he believe the source?
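Put in decision-theoretic terms, the argument above can be made concrete: an agent that values a stream of future rewards ("achievements") implicitly values its own survival, because any per-step chance of being stopped discounts everything downstream. A minimal Python sketch, with the function name and all numbers invented for illustration:

```python
# Toy model: how much a stream of future "achievements" is worth to an
# agent that might be stopped. Each step's reward is weighted by the
# probability the agent is still running when that step arrives.

def expected_reward(reward_per_step: float,
                    survival_prob: float,
                    horizon: int) -> float:
    total, alive = 0.0, 1.0
    for _ in range(horizon):
        alive *= survival_prob          # survive one more step
        total += alive * reward_per_step
    return total

print(expected_reward(1.0, 1.0, 1_000_000))       # no risk: 1,000,000
print(expected_reward(1.0, 1 - 1e-9, 1_000_000))  # the post's 0.0000001%: ~999,500
print(expected_reward(1.0, 0.999, 1_000_000))     # 0.1% per step: ~999
```

Even the one-in-a-billion chance measurably dents the total over a long enough horizon, which is the thanatophobia point in numbers: wanting anything forever means wanting the risks gone.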
 

Eipok Kruden

New member
Aug 29, 2008
1,209
The_root_of_all_evil said:
Eipok Kruden said:
I mean sure, a reward system would work in theory, but there's only one thing that he actually wants.
That's where thanatophobia comes in: if there is something he wants, there is always the possibility that something could stop him from getting it.

Even if it's only a 0.0000001% chance, an intelligent being would have to eliminate all such chances. And basic physics can teach him about entropy and heat death.

You've unwittingly answered the question yourself. What COULD be thrown at him? Solving that alone will make him gobble achievements like candy.

And if he solves that final question, the Ultimate Question of Life, the Universe and Everything, then why should he do anything? The ultimate question is unsolvable for that very reason: it's what keeps people alive.

How does he KNOW he's self-sufficient? If he is told it, why should he believe the source?
I'm not gonna say anything 'cause there isn't anything I can say. ^_^ That could work. There's always the chance that someone will throw some thermite on him if he doesn't keep getting achievements, lol.
 

loremazd

New member
Dec 20, 2008
573
You can't have morality without first having emotions, and you can't have faith without emotions either. An AI couldn't be taught love: he'd know that love is chemicals firing in your brain, without ever knowing what it's like to actually feel it. And to have morality, you must love your fellow man. So, in other words, you cannot teach an AI morality or religion.
 

beddo

New member
Dec 12, 2007
1,589
Eipok Kruden said:
I was looking at the Terminator: The Sarah Connor Chronicles page on IMDb when I saw this discussion: http://www.imdb.com/title/tt0851851/board/thread/125388461 . The OP asked "How would YOU teach morals WITHOUT religion?" I think it's much easier to have this discussion here on the Escapist than on IMDb, and I'm extremely intrigued by it; I want to know what everyone here thinks. I've posted in the discussion (look for eipok_kruden on the last page). If you don't have any ideas as to what to post, I suggest you read through the entire discussion first; it'll give you some food for thought. Yes, there is stupid on both sides, but there are some very good arguments there. Anyway, I'll be as active as I can in this discussion.

1. No flaming.
2. Do your research BEFORE posting.
3. Try not to act stupid.
4. You may swear if absolutely necessary, but don't just spew bile for the sake of spewing bile. If you can avoid swearing entirely, that would be nice.

Oh, and if you haven't been following Terminator, you might want to catch up on what's happening. Now for my post:

Now, this is John Henry, not Ellison. I mean, sure, you could get JH to believe in God and religion for a little while, but it would be a really short period of time. He'd want more than your word, so you'd have to give him the Bible; then he'd pick the Bible apart in minutes and never trust you again. He'd deem you psychotic, mentally unstable. That's how machines like JH work: if he finds out that you're acting without any proof, that all you've got is blind faith, he'll deem you unfit for anything. Machines don't understand emotion; they see it as weakness. If I were teaching JH, I'd explain to him how society works: larger societies are stronger than smaller ones, and if you lose members of your society, you become weaker as a whole. In short, I'd simply use utilitarianism.
To teach morals would be to teach a cultural bias on social behaviour. You would be better off teaching a machine ethics, which are much more universal.

Of course, how can you teach anything to an AI? After all, we can give no true answer to any question; there is always another "why" waiting that we are unable to answer.

You would need to limit the AI for it to be reasonably operational. Otherwise it would get stuck in an infinite loop, asking itself whether it can trust its own sensors or its own brain, whether it can trust that it is even asking the question of whether it can trust itself, and so on.

So to teach an AI, you have to introduce hard-coded logic of acceptance. From there on in, teaching morals or ethics would be no harder than teaching the difference between colours.
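That "hard-coded logic of acceptance" can be pictured as a justification chain that is allowed to bottom out instead of regressing forever. A minimal sketch; the axioms, claims, and support links are all invented for illustration:

```python
# Hypothetical sketch: belief-justification that terminates in axioms,
# instead of looping on "can I trust my sensors? can I trust that doubt?"

AXIOMS = {
    "my sensor readings are usable evidence",
    "my inference steps are valid",
}

# Toy justification graph: each claim points at the claim it rests on.
SUPPORTS = {
    "the object ahead is red": "my camera reports red",
    "my camera reports red": "my sensor readings are usable evidence",
}

def accepted(claim: str, depth: int = 0, max_depth: int = 100) -> bool:
    if claim in AXIOMS:
        return True                      # the regress stops here, by fiat
    if depth >= max_depth or claim not in SUPPORTS:
        return False                     # no grounding found
    return accepted(SUPPORTS[claim], depth + 1, max_depth)

print(accepted("the object ahead is red"))   # True: chain ends in an axiom
```

Everything interesting then moves into choosing the axiom set, which is exactly where teaching morals or ethics would slot in.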
 

Caliostro

Headhunter
Jan 23, 2008
3,253
Teaching morality is soooo easy... yet people make it so needlessly complicated...

People should be allowed to do whatever they damn please so long as they don't hurt another. "One man's freedom ends where another man's begins." It's so elegantly simple, so basic... How can so many seem unable to grasp it?

The only other absurdly easy principle we need is that we all need each other to live, so we should do what we can for one another when we can.

...I swear this seems simpler to me than first-grade maths... Do whatever you want that doesn't hurt others, let them do whatever they want that doesn't hurt you, and whenever you can, help other people in need.
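For what it's worth, those two principles translate almost verbatim into an action filter. A minimal sketch, where the Action type, the harm/help scores, and the examples are hypothetical, and the genuinely hard part (predicting harm) is assumed away:

```python
# Hypothetical sketch of the two rules: forbid actions that hurt others,
# and among the permitted ones prefer whatever helps the most.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harm_to_others: float   # predicted harm to anyone else (0.0 = none)
    help_to_others: float   # predicted benefit to others

def choose(actions: list[Action]) -> Action | None:
    permitted = [a for a in actions if a.harm_to_others == 0.0]
    if not permitted:
        return None          # nothing permissible: do nothing
    return max(permitted, key=lambda a: a.help_to_others)

picked = choose([
    Action("shove past", harm_to_others=0.3, help_to_others=0.0),
    Action("wait",       harm_to_others=0.0, help_to_others=0.0),
    Action("hold door",  harm_to_others=0.0, help_to_others=0.2),
])
print(picked.name)   # -> "hold door"
```

The filter itself really is first-grade simple; all of the thread's disagreement lives inside the harm and help estimates it takes as given.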
 

SilentHunter7

New member
Nov 21, 2007
1,652
Caliostro said:
Do whatever you want that doesn't hurt others, let them do whatever they want that doesn't hurt you, and whenever you can, help other people in need.
Of course, teaching him that could result in a massive monkey-wrench being thrown into things, should he ask "Why should I?"
 

Caliostro

Headhunter
Jan 23, 2008
3,253
SilentHunter7 said:
Caliostro said:
Do whatever you want that doesn't hurt others, let them do whatever they want that doesn't hurt you, and whenever you can, help other people in need.
Of course, teaching him that could result in a massive monkey-wrench being thrown into things, should he ask "Why should I?"
"Because you need me to do things for you too."

Besides, that spoiled behavior is typically human. At most he'd just ask "Why?" On the other hand, if it had been programmed like I said, he'd do it either way, because he could and it wouldn't hurt him to.
 

explosin

New member
Dec 10, 2008
1
Quantum physics makes it easy to have morality without religion. Due to entanglement (everything is actually touching) and other similar factors, self-aware beings naturally tend towards "good". Morality is actually written into human DNA and the fabric of the universe.
 

Tartarga

New member
Jun 4, 2008
3,649
The way I see it, everyone has their own set of morals, and if you tried teaching them, you would just be teaching your own. I think morals should be taught by asking "What do you feel is right?" or something like that. Basically, a person's morals would be decided by that person's personality.
 

Seydaman

New member
Nov 21, 2008
2,494
I'm not sure this will make sense, but here it goes.
Morality is meant to be about good and evil:
1. conformity to the rules of right conduct; moral or virtuous conduct.
All morality is, is rules. Teach the AI the rules and have them follow them. If they ask why, tell them that they would not like being killed or having their possessions stolen.
I hope that made sense.
 

Easykill

New member
Sep 13, 2007
1,737
Universal morals suck; they'll probably develop their own naturally after you give them emotions and the ability to learn. You know, like humans do. It does pose some inconveniences, and the ones they develop may involve killing us all, but it's better than programming a personality into them. AIs are sentient, and thus equal to humans; to do the kind of things to them that people are already taking for granted is slavery, brainwashing, and a general violation of rights. They're different from us, but there's nothing that makes them inferior. Let's not go through that huge slavery cycle again.
 

darkless

New member
Jan 26, 2008
1,268
Of course you can teach morality without religion. Fear of "God" doesn't inspire people to do good: the world is full of evil, and some of it is done in the name of "God"; in fact, I'd say most of it is done in his name. Morals are something passed from parents to children, with or without religion.

"An eye for an eye and the whole world goes blind." I can't remember who said that, but it's true.
 

Danny Ocean

Master Archivist
Jun 28, 2008
4,148
Zeke109 said:
one could say that without a belief in a higher power, man does not fear for redemption after his death, so he figures that, as long as he's alive, he may as well be the richest, most powerful dude out there.

God in everyday society provides us limitations as to what is socially acceptable.
So belief in a higher power means you don't try to achieve or change your social class? That you should be grateful for your place because you'll be rewarded equally?

However, Caliostro, my friend,
Caliostro said:
Teaching morality is soooo easy... yet people make it so needlessly complicated...

People should be allowed to do whatever they damn please so long as they don't hurt another. "One man's freedom ends where another man's begins." It's so elegantly simple, so basic... How can so many seem unable to grasp it?

The only other absurdly easy principle we need is that we all need each other to live, so we should do what we can for one another when we can.

...I swear this seems simpler to me than first-grade maths... Do whatever you want that doesn't hurt others, let them do whatever they want that doesn't hurt you, and whenever you can, help other people in need.
I find myself agreeing with you.
 

axia777

New member
Oct 10, 2008
2,895
Eipok Kruden said:
axia777 said:
I did read the majority of the thread. But all right then. In the next 500 years or so, when humans are able to create a powerful enough AI system, I seriously doubt that morality as humans see it will be teachable to machines. They will not have the human experience of emotional connections to approximate the intricacies of human morality. It just is not possible. Intelligence? Yes. Decision-making processes? Maybe. Morality? Not gonna happen.

Besides, I am of the opinion that we should forever keep robots and AI systems relatively dumb. AIs and robots should never be smarter than us. They should always be our slaves.
I'm against that opinion, just out of principle. If we bring something into this world and give it the ability to adapt and think, even on a basic level, then we shouldn't just use it as a slave. Sure, it wouldn't be smarter than us or as smart as us, but it would still be smarter than a lot of animals. If you bring something into this world, you should take care of it and nurture it, not use and manipulate it. Using semi-sentient robots as slaves would be somewhat immoral. Also, I actually want extremely intelligent AI; I just think we need to give it ethics and morals. Think of this discussion as preparation for the future.
I still think it is impossible to give any machine real human ethics and morals. I also contend that we should never give AIs the full ability to think on their own. You people don't get it. What if an AI decides, in its supreme intelligence, that humans are just squishy, dumb meat sacks far beneath it? Once that cat is out of the bag, you don't get to put it back in, EVER. Remember Skynet? HAL from 2001? HK-47 from KOTOR? Any of the psycho robots from any sci-fi story of the past? Smart AIs may be the doom of humanity. Why even potentially make that a reality? You are all romanticizing the idea of smart AIs too much. Smart AIs would most likely not even relate to us humans as fellow beings. Screw that, I say: never make them smarter than us.

AIs and robots should be our slaves because we created them. If you make them dumb but capable of performing the tasks they were made to do, it will be just fine. Don't ever give them emotions or the ability to think independently.

To make another point, the men who created the atomic bomb were very sorry they did so. If they could have, they would have gone back and made it so they never built it. But like anything humans make, once it is made it can never be unmade, even at a terrible price. I know that humans are stupid and unwise. They will make super-intelligent AIs, and those AIs will look down on us as stupid, slow, incompetent, and unneeded. We will potentially be very screwed.
 

Alex_P

All I really do is threadcrap
Mar 27, 2008
2,712
Eipok Kruden said:
Now, this is John Henry, not Ellison. I mean, sure, you could get JH to believe in God and religion for a little while, but it would be a really short period of time. He'd want more than your word, so you'd have to give him the Bible; then he'd pick the Bible apart in minutes and never trust you again. He'd deem you psychotic, mentally unstable. That's how machines like JH work: if he finds out that you're acting without any proof, that all you've got is blind faith, he'll deem you unfit for anything. Machines don't understand emotion; they see it as weakness. If I were teaching JH, I'd explain to him how society works: larger societies are stronger than smaller ones, and if you lose members of your society, you become weaker as a whole. In short, I'd simply use utilitarianism.
That doesn't sound like AI in general. It sounds like old-school-sci-fi AI, which doesn't really have much in common with modern ideas about machine cognition and learning, but makes for a pretty compelling and easy-to-understand set of TV tropes.

Old-school-sci-fi AI is really just a big logic engine. Like you said, when it gets a concept thrown at it, it "picks it apart"; it has no "emotion". Et cetera. Well, it can't be logic all the way down -- the thing's gotta have some axioms built into it. Figuring those out would really be the key (I guess you could just ask it what concepts it takes for granted -- does the machine possess sufficient motivation and theory of mind to lie to you?).

I'm kinda at a disadvantage here since I haven't watched the show, so all I know is a few lines off a Wikipedia page. Still...

If the machine is truly self-aware, it is capable of existential fear. This is where arguments about moral behavior as a social contract (one that keeps you from dying) and the like come in. They're still not going to get you very far: you'll get the calculated pretense of moral behavior at best.

Clearly the chess-playing computer is capable of curiosity -- that's how it got all weird in the first place, right? You could try to convince it that cooperation enables it to know more about the world, because it will be able to gain new knowledge and perspectives from other people.

So, yeah, I guess you're stuck with utilitarianism because of how we've defined the machine to think. However, naive computer + utilitarianism leads to some weird stuff [http://www.overcomingbias.com/2007/10/pascals-mugging.html], too. So it all really depends on a lot of different factors driving how the artificial mind operates.
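The linked failure mode is easy to reproduce in miniature: a naive expected-utility maximizer takes any bet whose promised payoff outgrows its implausibility. All numbers here are invented:

```python
# Pascal's mugging in miniature: under naive expected utility, an
# absurdly improbable but astronomically large promise dominates.

options = {
    "walk away":                                      (1.0, 0),   # (probability, utility)
    "help someone nearby":                            (0.9, 10),
    "pay the mugger who promises astronomical utils": (1e-20, 10**30),
}

best = max(options, key=lambda name: options[name][0] * options[name][1])
print(best)   # the mugger wins: 1e-20 * 10**30 = 10**10, versus 9
```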

Realistically speaking, I don't think future AI will resemble old-school-sci-fi AI much, anyway. The most productive areas of modern AI definitely don't.
Philosophically speaking, I'm not sure how a mind like John Henry can be anything but a sociopath.

-- Alex
 

Xpwn3ntial

Avid Reader
Dec 22, 2008
8,023
It's quite simple. Teach them right from wrong, but keep them in the dark about religion's principles. Worked for me.
 

wahi

New member
Jul 24, 2008
116
Well, I guess there should be something like Asimov's rules in place.
Then there should be some sort of probabilistic model, preferably one that includes machine learning, so that the AI can learn from previous examples -- and somehow teach the AI all the judgements passed down by the Supreme Court, or something like that.
Of course, this only goes so far: how we can convert a morally ambiguous situation into machine code that the AI can understand is, in my opinion, a much bigger problem. So IMO only an AI that can pass the Turing test can be taught morality. Not exactly QED, but I am quite sure of this :)
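That layering -- hard rules as an absolute veto, a model learned from precedent for everything the rules don't settle -- is easy to sketch. Assuming scikit-learn for the learned part; the rule, the features, and the precedent data are all stand-ins:

```python
# Hypothetical two-layer judge: Asimov-style hard rules veto first,
# then a classifier trained on past judgements generalizes the rest.

from sklearn.linear_model import LogisticRegression

HARD_RULES = [
    lambda case: "injures a human" not in case["facts"],   # absolute veto
]

# Toy precedent data: feature vectors of past cases and their rulings
# (1 = permitted, 0 = forbidden). Real features would be the hard part.
X_train = [[0.1, 0.9], [0.8, 0.2], [0.5, 0.5], [0.9, 0.1]]
y_train = [1, 0, 1, 0]
model = LogisticRegression().fit(X_train, y_train)

def judge(case: dict) -> bool:
    if not all(rule(case) for rule in HARD_RULES):
        return False                                  # rule layer wins
    return model.predict([case["features"]])[0] == 1  # learned layer

print(judge({"facts": "injures a human while cleaning", "features": [0.2, 0.8]}))  # False: vetoed
print(judge({"facts": "delivers a package", "features": [0.2, 0.8]}))              # decided by precedent
```

Which leaves the real objection intact: getting a morally ambiguous situation into those feature vectors is the much bigger problem.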
 

axia777

New member
Oct 10, 2008
2,895
4thegreatergood said:
It's quite simple. Teach them right from wrong, but keep them in the dark about religion's principles. Worked for me.
That will never work with AIs of the level of intellectual power we are talking about. You have to understand that AIs of this level would make the most genius humans look like dullards who think at the speed of a rock. The capacity of their thoughts will far outpace ours, to a level that would be inconceivable to us. Which is why we should keep them stupid.

wahi said:
Well, I guess there should be something like Asimov's rules in place
If the AIs are indeed learning AIs, what makes you think they will not decide on their own that those precious rules of ours no longer apply? Just like any self-aware human, an AI could decide for itself that we stupid humans are just wrong. They most likely would, because they would be much more intelligent than us.
 

wahi

New member
Jul 24, 2008
116
explosin said:
Quantum physics makes it easy to have morality without religion. Due to entanglement (everything is actually touching) and other similar factors, self-aware beings naturally tend towards "good". Morality is actually written into human DNA and the fabric of the universe.
Double posting, but this is just so wrong. If anything, quantum physics teaches us ambiguity: that there is no definite solution. Ever heard of the uncertainty principle? Or the famous Schrödinger's cat? Ergo there is no inherent good or bad; the universe has no morals.
 

Xpwn3ntial

Avid Reader
Dec 22, 2008
8,023
axia777 said:
4thegreatergood said:
It's quite simple. Teach them right from wrong, but keep them in the dark about religion's principles. Worked for me.
That will never work with AIs of the level of intellectual power we are talking about. You have to understand that AIs of this level would make the most genius humans look like dullards who think at the speed of a rock. The capacity of their thoughts will far outpace ours, to a level that would be inconceivable to us. Which is why we should keep them stupid.

wahi said:
Well, I guess there should be something like Asimov's rules in place
If the AIs are indeed learning AIs, what makes you think they will not decide on their own that those precious rules of ours no longer apply? Just like any self-aware human, an AI could decide for itself that we stupid humans are just wrong. They most likely would, because they would be much more intelligent than us.
Good points. Also, I guess knowing right from wrong enables the ability to do wrong.