Teaching Morality to an AI

sidhe3141

New member
Jun 12, 2008
wahi said:
Well, I guess there should be something like Asimov's rules in place.
And then there should be some sort of probabilistic model, preferably one that includes machine learning so that the AI can learn from previous examples, and somehow we teach the AI all the judgements handed down by the Supreme Court, or something like that.
Of course, this only goes so far. How we convert a morally ambiguous situation into machine code that the AI can understand is, in my opinion, a much bigger problem. So IMO only an AI that can pass the Turing test can be taught morality. Not exactly QED, but I am quite sure of this :)
What kind of "previous examples" are we talking about? If you use the Supreme Court as an example, not only do you get an AI bound entirely by laws (which are really nothing more than majority prejudices), but you also risk it deciding that things like Dred Scott should count for more than things like Brown v. Board. Religion: same basic problems, regardless of which one you pick. No moral system is perfect, and very few are internally consistent.
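
To make that concrete, here's a minimal sketch of the "learn from previous judgements" idea as nearest-precedent lookup; the cases, feature names, and verdict labels are invented stand-ins, and the only point is that the learner copies whatever is in its precedent set, bad rulings included:

```python
# Toy sketch: "learn morality from previous examples" as nearest-precedent
# lookup. The cases, feature names, and verdict labels below are invented
# stand-ins, not real case data.

precedents = [
    # (facts of the case, verdict handed down)
    ({"separate_facilities": True, "era": "1850s"}, "permissible"),      # a Dred Scott-style ruling
    ({"separate_facilities": True, "era": "1950s"}, "unconstitutional"), # a Brown v. Board-style ruling
]

def judge(new_case):
    """Copy the verdict of the most similar past case."""
    def overlap(precedent):
        facts, _verdict = precedent
        return sum(1 for key, value in new_case.items() if facts.get(key) == value)
    _facts, verdict = max(precedents, key=overlap)
    return verdict

# A new case that happens to resemble the older, worse precedent:
print(judge({"separate_facilities": True, "era": "1850s"}))  # -> "permissible"
```

Nothing in the lookup says which precedent *should* carry more weight; that judgement has to come from somewhere outside the data, which is exactly the problem.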

I say that our hypothetical AI might not need to be taught morals at all, depending on how we go about creating it. Most of us are assuming that the AI will be created by reaching some deep understanding of human psychology and writing a program to copy it. But I'd say it is more likely that the AI would be created by taking a pile of code, placing it in a virtual environment, and allowing it to self-replicate with modification. Something would eventually arise that was sentient, and it would probably have the same (or at least similar) pack instincts as human beings, and the pack instinct is the foundation of virtually all morality and ethics.
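
The replicate-with-modification loop itself is short to write down; here is a bare-bones sketch in Python, where the "organisms" are just bit strings and the "environment" is a made-up fitness function, so it only illustrates the loop, not sentience or pack instincts actually emerging:

```python
import random

# Bare-bones "pile of code in a virtual environment, self-replicating with
# modification": random genomes, mutation on copy, and selection by a toy
# fitness function standing in for the environment.

GENOME_LEN = 16
POP_SIZE = 50
GENERATIONS = 300
MUTATION_RATE = 0.02

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in for whatever the virtual environment rewards: here, simply
    # how many 1-bits the organism carries.
    return sum(genome)

def mutate(genome):
    # Replication with modification: each bit has a small chance to flip.
    return [1 - bit if random.random() < MUTATION_RATE else bit for bit in genome]

def next_generation(population):
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[: POP_SIZE // 2]             # selection
    offspring = [mutate(random.choice(survivors))   # self-replication
                 for _ in range(POP_SIZE - len(survivors))]
    return survivors + offspring

population = [random_genome() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population = next_generation(population)

print("best fitness after evolving:", max(fitness(g) for g in population))
```

What comes out of a loop like this depends entirely on what the environment rewards, so which instincts (pack or otherwise) would emerge is really a question about how that environment is set up.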
 

NekoAnastasia

New member
Jan 16, 2009
In days gone by, they used to teach classes in school about how to be a good citizen: how to be polite, courteous, considerate, and generous, how to help others, and why those norms and values were important, in addition to the law. Personally, I think that kind of lesson is sorely missed and would benefit kids in schools. I'm 21, and when I left high school at 16, kids in the younger years who were 11 or 12 years old were already very sexually active, would swear and curse at teachers, get into fights, bring pornography to class, etc.

Whilst I'm extremely liberal, something's definitely gone wrong when that behaviour is typical of kids before their teens, rather than of college students who, at least, can legally have sex. I think we should ditch RE (religious education) and bring back citizenship classes. RE doesn't teach anyone tolerance for other faiths as it is; it's 90% Christianity with occasional Judaism and Islam thrown in, and everyone else gets ignored. Perhaps laying down an understanding of social values, and why they're important to keeping society running smoothly, alongside the law, is what's needed?

Anyway, if I had to teach anyone morality without religion, I'd do so from the perspective of being a good, productive citizen who functioned within society and helped to further the common goals by working together and not being a douche.
 

Alex_P

All I really do is threadcrap
Mar 27, 2008
A parable about morality and AI [http://www.overcomingbias.com/2008/08/pebblesorting-p.html].

-- Alex
 

Raptoricus

New member
Jan 13, 2009
I personally think that religion would be a terrible way to teach an AI morals, because so much of it is open to interpretation. A true AI would be very much like a child in that it would learn things through trial and error and from what's going on in its surroundings. Of course, the world is not a nice place in truth; people kill each other all the time, be it through crime, war, or self-defence. But the general reasoning behind people killing people is usually jealousy or greed, which an AI should really have none of (it's a machine; why would it want anything a person would want?), so the only reason I could see it needing to resort to violence would be self-defence. Now, for the machine to defend itself, firstly there would have to be a threat (obviously); secondly, it would have to have the will to survive (which I'm assuming it would). So there you have the problem: if it were ever to feel threatened, there wouldn't be much you could do to stop it from defending itself, but as a technically sentient being, doesn't it have the right to defend itself?

I suppose where I'm trying to go with this is that it shouldn't NEED any form of morals, so long as it doesn't want anything, and I can't see it viewing the whole human race as a threat to its existence, as we wouldn't be truly threatening it (and anyone who did would pretty much deserve it). I'm not 100% on my argument, though; if anyone thinks differently, please say :).
 

SamuraiAndPig

New member
Jun 9, 2008
The problem is that you'd be teaching it morals in the first place. Morals have the problem of coming with no conditions and with extreme punishments.

For example, let's say your moralistic AI is a cop, and it sees someone stealing an apple at a supermarket. Any normal police officer (unless they're an NJ State Cop) would tell the offender to put it back, maybe write them a ticket for a misdemeanor and have them show up in front of a judge.

Now let's say Cop 2.0 sees you breaking into someone's house and stealing thousands in cash. Same act, different scale. In the second situation you'd almost certainly be doing time, and the homeowner's lawyer would probably want a word with you.

If Cop 2.0 only understands morals, there can be only one punishment set for the specific crime of stealing. Therefore, the person stealing the apple and the person stealing wads of cash would both get the most extreme punishment for extremely different crimes. One crime was committed on public property owned by a business, and the value of the goods taken was very small, if not negligible. The second was committed in a private house, and the amount taken was life-changing. Anyone sane would look at that and automatically determine that they are different crimes, but not our cyborg cop: stealing is stealing, period. There would be no punishment that fits the specific crime.

What you need to do is teach Cop 2.0 *ethical* behavior, not *moral* behavior. There can be thousands of causes of stealing, but in a purely moralistic mode, only one effect. Ethics allows for thousands of reactions to thousands of different acts, even if they are the same in spirit.

I would not want to be the guy who programs that.
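
For what it's worth, a toy version of the contrast is short to write; the hard part is anticipating every branch. The crime labels, thresholds, and penalties below are all made up for illustration:

```python
def moral_cop(crime: str) -> str:
    # "Morals": one fixed response per category, no conditions attached.
    fixed_penalties = {"stealing": "maximum sentence"}
    return fixed_penalties[crime]

def ethical_cop(crime: str, value_taken: float, location: str) -> str:
    # "Ethics": the response scales with the circumstances of the act.
    if crime == "stealing":
        if value_taken < 20 and location == "shop":
            return "warning and a misdemeanor ticket"
        if value_taken < 1000:
            return "fine and a court date"
        return "arrest, likely jail time"
    return "unknown offence, refer to a human"

print(moral_cop("stealing"))                                       # same answer every time
print(ethical_cop("stealing", value_taken=1.50, location="shop"))  # apple thief
print(ethical_cop("stealing", value_taken=25000, location="private home"))  # burglar
```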
 

Ibaapzo

New member
Dec 25, 2008
I believe that the Bible is in our lives to teach us morality, not religion: a guide more than a sect. From its stories, it gives us a guide to live by: treat others the way you wish to be treated, don't sleep with your buddy's wife, etc. (roughly; if I were quoting, this post would be much longer...).

So morality is what's derived from the stories. Whether you want to use religion as a basis is your choice; I'm looking at the Bible as a guide for now.


You can teach morality without religion. A great (and extreme) example is atheist couples. Not every child raised by atheist parents grows up angry, and not every child raised by religious parents grows up happy. What's to say a robot is any different? America's structure of "morality" is based upon teachings, and if you read Sesame Street books, they give you the same reasoning about morality that the Bible does.


Raising a robot to know morality doesn't mean teaching it religion.

For another look:
"Deaths in the Bible. God - 2,270,365 not including the victims of Noah's flood, Sodom and Gomorrah, or the many plagues, famines, fiery serpents, etc because no specific numbers were given. Satan - 10."
---Unknown

"Who will say with confidence that sexual abuse is more permanently damaging to children than threatening them with the eternal and unquenchable fires of hell?"
---Richard Dawkins

The morals taught in the Bible are good. A supreme being isn't needed in order to implant morality.