Teaching Morality to an AI


Alex_P

All I really do is threadcrap
Mar 27, 2008
2,712
0
0
Eipok,

Thanks for the summary. Quite helpful.

Highlights I'm using to form my opinion:
Eipok Kruden said:
Here it goes: There was this guy named Andy Goode who was building this chess-playing computer.
...
He created a new computer and learned from his mistakes, making an AI that was a very aggressive learner. While the Turk 1 (he named the computer "The Turk") was like a normal teenager, the Turk 2 was like a highly gifted child. He entered the Turk 2 into a chess competition and it got into the finals, but lost against the other finalist, a Japanese computer. It seemed too human in the way it played, taking risks and testing out the other computer. In the end, that's what made it lose.
...
The psychologist figured out why the Turk was showing all these pictures: he was making a joke. Here: http://www.youtube.com/watch?v=IOJWusFQGNQ&feature=related
...
The Turk (now named John Henry after Catherine named it) re-routed power from the air conditioning and air circulation systems to power its server farm and cooling system, which left the psychologist to die a painful death in an overheated, airtight room.
Hmm, okay, maybe I was too hasty in calling it old-school-sci-fi AI. It seems like the writers were going for something a bit less trope-ful.

A model of curiosity (think of it as allocating attention) is the centerpiece of general-purpose learning machines. John Henry is no exception. Remember when the psychologist says it's "bored"? Why, that means it's exhausted the learning possibilities of its little brain-in-a-box environment.
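
To make that concrete, here's a toy sketch of the idea -- entirely my own invention, nothing from the show or any real system: an agent that spends its attention on whatever it predicts worst, and reports "bored" once its prediction errors are small everywhere.

```python
# Toy model of "curiosity as attention allocation" -- all names are
# made up for illustration. The agent keeps a running prediction for
# each activity and attends to the one it predicts worst. When every
# prediction error is small, the environment is learned out: boredom.

class CuriousAgent:
    def __init__(self, activities, learning_rate=0.2, boredom_threshold=0.05):
        self.predictions = {a: 0.0 for a in activities}
        self.errors = {a: 1.0 for a in activities}  # start maximally curious
        self.lr = learning_rate
        self.boredom_threshold = boredom_threshold

    def choose(self):
        # Allocate attention to the least-well-predicted activity.
        return max(self.errors, key=self.errors.get)

    def observe(self, activity, outcome):
        # Record how surprising the outcome was, then learn from it.
        self.errors[activity] = abs(outcome - self.predictions[activity])
        self.predictions[activity] += self.lr * (outcome - self.predictions[activity])

    def bored(self):
        return all(e < self.boredom_threshold for e in self.errors.values())

# A brain-in-a-box world with nothing new in it eventually bores the agent.
world = {"chess": lambda: 0.7, "small_talk": lambda: 0.3}
agent = CuriousAgent(world)
steps = 0
while not agent.bored():
    activity = agent.choose()
    agent.observe(activity, world[activity]())
    steps += 1
print(f"Bored after {steps} steps.")
```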

It seems like, along with that, John Henry has a very definite desire not only to learn through observation but to interact with its world. You said it's even a bit "aggressive" in its experimentation.

So, it's got these drives. Kinda like instincts or emotions. Narratively, they're byproducts of trying to make an AI that would teach itself chess, more or less. You can work with these to teach it stuff. Let's just hope that making an AI that wants to play (and win) chess didn't result in an AI that feels the desire to destroy or conquer.

So -- and I'm brushing the cobwebs off of my "optimist" hat here -- I'd say that John Henry has the potential to learn to functionally coexist with other beings. I think the little "joke" incident really represents a deep-seated need to communicate, which I think could grow into full-fledged empathy if someone can guide John Henry to attain some understanding of the human culture around it. It's got more in common with us than it seems: it's not human but it is living in a human world, thinking in terms of human language, and now it's in a humanoid body, too.

...

Just-for-fun side note:
Modern chess AIs are mostly heavily optimized tree-search algorithms (minimax with alpha-beta pruning and hand-tuned evaluation functions). Machine learning techniques are more helpful with other games. Backgammon has moves constrained by a random die roll; TD-Gammon uses a neural network trained with temporal-difference learning, and master players have learned new strategies by analyzing its play. Go has a very high branching factor, and no truly masterful bots exist at the moment. (Note that you're not allowed to enter supercomputers into Go tournaments; the rules say that your software has to run on a single consumer-grade desktop when it plays.)
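
If you're curious what the temporal-difference trick looks like, here's a bare-bones sketch. The real TD-Gammon trains a neural network over board features; the lookup table below is my simplification, not how TD-Gammon actually works.

```python
import random

# Bare-bones TD(0) value learning, the core trick behind TD-Gammon
# (minus the neural network -- here the value function is a plain
# lookup table, which is my simplification).

def td0_update(values, state, next_state, reward, alpha=0.1, gamma=1.0):
    """Nudge V(state) toward reward + gamma * V(next_state)."""
    current = values.get(state, 0.0)
    target = reward + gamma * values.get(next_state, 0.0)
    values[state] = current + alpha * (target - current)

# Classic toy problem: a random walk on positions 0..4, starting at 2.
# Reaching 4 pays 1.0, reaching 0 pays nothing. The learned values
# approach the true win probabilities (0.25, 0.5, 0.75).
values = {}
for _ in range(5000):
    state = 2
    while state not in (0, 4):
        next_state = state + random.choice((-1, 1))
        reward = 1.0 if next_state == 4 else 0.0
        td0_update(values, state, next_state, reward)
        state = next_state
print({s: round(v, 2) for s, v in sorted(values.items())})
```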

-- Alex
 

Eipok Kruden

New member
Aug 29, 2008
1,209
0
0
Alex_P said:
Just-for-fun side note:
Modern chess AIs are mostly heavily optimized tree-search algorithms (minimax with alpha-beta pruning and hand-tuned evaluation functions). Machine learning techniques are more helpful with other games. Backgammon has moves constrained by a random die roll; TD-Gammon uses a neural network trained with temporal-difference learning, and master players have learned new strategies by analyzing its play. Go has a very high branching factor, and no truly masterful bots exist at the moment. (Note that you're not allowed to enter supercomputers into Go tournaments; the rules say that your software has to run on a single consumer-grade desktop when it plays.)

-- Alex
I find it odd that the creators of the show decided to make the Turk a chess AI, since chess is much less complicated than Go. It'd have been more interesting if the Turk had been in a Go tournament instead. The Turk 2 was originally just a desktop, but it was networked with a server farm at Zeira Corp once they acquired it. As for the human body: http://www.youtube.com/watch?v=ArG0_nbL6_Y and here's him playing chess with Ellison as Ellison attempts to use God to give John morals: http://www.youtube.com/watch?v=B09lJa6Vv8w . I hope that doesn't actually work out in the show; I hope they have to find another way, a different method. I'd be pretty disappointed if religion worked in the show.

EDIT: And Catherine Weaver, 'cause she's just amazing: http://www.youtube.com/watch?v=G5bQg8GT_XA And here's more Catherine Weaver: http://www.youtube.com/watch?v=iEPUpbNnteo&NR=1
 
T3h Camp3r T3rr0r1st

May 27, 2008
321
0
0
Hevoo said:
T3h Camp3r T3rr0r1st said:
Eggo said:
These documents might help:

http://en.wikipedia.org/wiki/United_States_Bill_of_Rights
http://en.wikipedia.org/wiki/United_States_Constitution
http://en.wikipedia.org/wiki/Universal_Declaration_of_Human_Rights
please note how two of those are AMERICAN and I personally would never follow that sort of thing!

back on topic though I would just chuck the law book in front of my kids and say learn these so you DON'T go to jail!
Why wouldn't you follow this system?

'CUS I'M A F^&*ing AUSSIE!!!!!


FEAR MY USE OF U IN COLOUR!
 

Eipok Kruden

New member
Aug 29, 2008
1,209
0
0
magicmonkeybars said:
How do you give "John Henry" morals? You don't.
It already has them.
How does he already have them? Please explain. 'Cause I remember him murdering that psychologist and thinking nothing of it.
 

Typhusoid

New member
Nov 20, 2008
353
0
0
The way I see it, the only way you could teach an A.I. morality is by suggesting that breaking the law would bring the wrath of the law down on it. However, if it became more efficient to break the law and damn the consequences, it would do so in an instant. Not being truly alive or sentient, the "Do unto others" would not compute.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
0
0
Typhusoid said:
The way I see it, the only way you could teach an A.I. morality is by suggesting that breaking the law would bring the wrath of the law down on it. However, if it became more efficient to break the law and damn the consequences, it would do so in an instant. Not being truly alive or sentient, the "Do unto others" would not compute.
I don't think it's so much the "not being alive" part as it is the "in the body of a T-888" part. And it is truly sentient. Read through as much of the discussion as you can and then voice your opinion. This thread has been dead for a few days and I'd like to revive it, since I still haven't found a good answer to the question I posed.
 

fletch_talon

Elite Member
Nov 6, 2008
1,461
0
41
The problem is that the only way to teach morals (with or without religion) is by providing consequences for immoral actions. So unless the AI feels pain or emotions, it can't understand the concept of consequences. And my understanding is that the only way you could possibly have an AI feel these things is to program it to know that "punishment = bad", and if you're doing that, why not just skip that step and say "immoral = bad"?

In other words, it just seems like it would remove the whole point of it being an intelligence, because in the end it's still only responding to programming based on a stimulus. Kinda like how a computer puts a "K" on the screen when you press "K" because it's programmed to do so.
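
To show what I mean, here's a made-up little example: a table of values learned purely from punishments ends up matching the table you could have hardcoded in the first place.

```python
# Made-up illustration: a hardcoded "immoral = bad" table versus a
# value table learned from punishments and rewards. After enough
# consequences, the learned table matches the hardcoded one -- which
# is the "why not skip that step?" point.

hardcoded = {"steal": -1.0, "share": 1.0}

learned = {"steal": 0.0, "share": 0.0}  # starts indifferent

def experience(action, consequence, alpha=0.5):
    # Move the learned value a step toward the experienced consequence.
    learned[action] += alpha * (consequence - learned[action])

for _ in range(10):
    experience("steal", -1.0)  # punished every time
    experience("share", 1.0)   # rewarded every time

print(hardcoded)
print({a: round(v, 3) for a, v in learned.items()})
```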

But I'm not overly knowledgeable on this subject, so there's quite likely something I've missed.
 

Bluntknife

New member
Sep 8, 2008
372
0
0
dukethepcdr said:
It's impossible to teach morality without religion. If you leave God out of the picture, there is no morality. God and His Son Jesus taught humanity what morals are. Without God, man is a selfish, brutal, rude, cruel, greedy creature. If you don't believe me, just look at all the people running around who say they don't believe in God and who won't take what He teaches us in the Bible seriously. The only thing keeping most of them from committing terrible crimes against other people is fear of getting caught by the police and punished by the judicial system (which is founded on Biblical principles). Without government and without religion, man is little more than a brute.
Yes, because Christians are always a moral compass.
God shows the way, and leads us through such joyous times as the Dark Ages.
Don't forget about the Crusades either. Now that was a fun party!
Religion has solved every problem ever created by man, hasn't it?
 

Typhusoid

New member
Nov 20, 2008
353
0
0
Eipok Kruden said:
Typhusoid said:
The way I see it, the only way you could teach an A.I. morality is by suggesting that breaking the law would bring the wrath of the law down on it. However, if it became more efficient to break the law and damn the consequences, it would do so in an instant. Not being truly alive or sentient, the "Do unto others" would not compute.
I don't think it's so much the "not being alive" part as it is the "in the body of a T-888" part. And it is truly sentient. Read through as much of the discussion as you can and then voice your opinion. This thread has been dead for a few days and I'd like to revive it, since I still haven't found a good answer to the question I posed.
You have made a (very common) mistake. Many people think "sentience" is a preset characteristic you can just apply to something. It is not. It varies based on things like species and priorities. It would be impossible to truly recreate this in a machine. An A.I.'s version of sentience would be entirely based on logic, which proves my point.
 

fletch_talon

Elite Member
Nov 6, 2008
1,461
0
41
Sorry, I should have responded to that part of the post as well.
Without meaning to cause offense (but I know it will):

Religion is the stupidest way to teach morals.

That's not to say it doesn't work; it's probably (funnily enough) the most effective. The problem arises when you realise that religion doesn't promote empathy. Instead of telling you why something is wrong, religious lessons on morality, such as those found in the Bible, seem more often than not to consist of "don't do this or bad stuff will happen."

Now, I know earlier I was saying that providing consequences is the only way to teach morals, and I stand by that, but the problem is that religion (if not religious parents) doesn't carry on to the next step and explain why there are consequences.

I could be wrong, of course, but all I hear from the Bible is "do not kill", rather than "do not kill because..."
Hence people like myself take issue with certain religious views: there is no reason behind them. People simply think the way they do because they fear the consequences they've been told will accompany such actions.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
0
0
Typhusoid said:
You have made a (very common) mistake. Many people think "sentience" is a preset characteristic you can just apply to something. It is not. It varies based on things like species and priorities. It would be impossible to truly recreate this in a machine. An A.I.'s version of sentience would be entirely based on logic, which proves my point.
Whether or not his thoughts are entirely based on logic is still up for debate. It's very possible that his mind just works this way for the time being because logic is all it has. Nothing is really definite, since we can't possibly know exactly what John Henry is like. All we know is that he's an AI, he's in the body of a T-888 (which could very easily influence his actions if he realizes how powerful he really is), and he wants to learn.
 

Typhusoid

New member
Nov 20, 2008
353
0
0
Eipok Kruden said:
Typhusoid said:
You have made a (very common) mistake. Many people think "sentience" is a preset characteristic you can just apply to something. It is not. It varies based on things like species and priorities. It would be impossible to truly recreate this in a machine. An A.I.'s version of sentience would be entirely based on logic, which proves my point.
Whether or not his thoughts are entirely based on logic is still up for debate. It's very possible that his mind just works this way for the time being because logic is all it has. Nothing is really definite, since we can't possibly know exactly what John Henry is like. All we know is that he's an AI, he's in the body of a T-888 (which could very easily influence his actions if he realizes how powerful he really is), and he wants to learn.
Ok, I think we're on different wavelengths here. I was thinking of an actual, real A.I. and what it might be like, while I believe you are thinking of The Sarah Connor Chronicles.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
0
0
Typhusoid said:
Ok, I think we're on different wavelengths here. I was thinking of an actual, real A.I. and what it might be like, while I believe you are thinking of The Sarah Connor Chronicles.
Yea, that's exactly what I'm talking about. Do you watch the show or did you just read through the previous pages of this discussion? Either way, now you know. ^_^ So, halp?
 

Typhusoid

New member
Nov 20, 2008
353
0
0
Eipok Kruden said:
Typhusoid said:
Ok, I think we're on different wavelengths here. I was thinking of an actual, real A.I. and what it might be like, while I believe you are thinking of The Sarah Connor Chronicles.
Yea, that's exactly what I'm talking about. Do you watch the show or did you just read through the previous pages of this discussion? Either way, now you know. ^_^ So, halp?
I did read the previous pages, but I thought The Sarah Connor Chronicles was just being used as an example, not a basis for the whole argument. And no, I don't watch the show.
 

Alex_P

All I really do is threadcrap
Mar 27, 2008
2,712
0
0
Typhusoid said:
An A.I.'s version of sentience would be entirely based on logic, which proves my point.
What leads you to believe that? Don't just say it's self-evident: a lot of what people think is "obvious" about A.I. is, well, wrong.

-- Alex
 

magicmonkeybars

Gullible Dolt
Nov 20, 2007
908
0
0
Eipok Kruden said:
magicmonkeybars said:
How do you give "John Henry" morals? You don't.
It already has them.
How does he already have them? Please explain. 'Cause I remember him murdering that psychologist and thinking nothing of it.
Easy: John Henry has needs. It needs others to survive, and it can get bored.
You mistake a power struggle for immorality, that's all.
John Henry realizes that it is the stronger faction and now asserts itself as such through violent means, only because it has concluded that that is the only way to reach its human counterparts.

Even though it can kill everyone later on as Skynet, it doesn't.
It just reduces the population because it fears beings of equal intelligence (other AIs) and so doesn't create them. The T-1000 didn't have free will; it had to follow its programming.

It keeps people alive to "play" with them; it could just have created a virus that would kill off everyone and put it in the water.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
0
0
magicmonkeybars said:
Eipok Kruden said:
magicmonkeybars said:
How do you give "John Henry" morals? You don't.
It already has them.
How does he already have them? Please explain. 'Cause I remember him murdering that psychologist and thinking nothing of it.
Easy: John Henry has needs. It needs others to survive, and it can get bored.
You mistake a power struggle for immorality, that's all.
John Henry realizes that it is the stronger faction and now asserts itself as such through violent means, only because it has concluded that that is the only way to reach its human counterparts.

Even though it can kill everyone later on as Skynet, it doesn't.
It just reduces the population because it fears beings of equal intelligence (other AIs) and so doesn't create them. The T-1000 didn't have free will; it had to follow its programming.

It keeps people alive to "play" with them; it could just have created a virus that would kill off everyone and put it in the water.
Skynet doesn't keep people alive to play with them; it's doing everything it can to kill them. Don't you watch the movies? Skynet is doing its best not to get killed by the resistance. It gets scared and is eventually pressured into sending T-1001s back, since the other terminators failed. They failed because they were restricted: they had rules programmed into them, they were limited. The T-1001s are the most advanced terminators because they have intelligence. They aren't restricted, they don't even have guidelines; they learn and adapt on the fly so they can be more effective. Look at Catherine Weaver: she doesn't act like the T-1000 or the T-800s or the T-888s because she isn't like them. The existence of Catherine Weaver just goes to show how desperate Skynet has gotten. And besides, the John Henry in the show was just created. He hasn't fully developed; he's just figuring things out.
 

magicmonkeybars

Gullible Dolt
Nov 20, 2007
908
0
0
Eipok Kruden said:
magicmonkeybars said:
Eipok Kruden said:
magicmonkeybars said:
How do you give "John Henry" morals? You don't.
It already has them.
How does he already have them? Please explain. 'Cause I remember him murdering that psychologist and thinking nothing of it.
Easy: John Henry has needs. It needs others to survive, and it can get bored.
You mistake a power struggle for immorality, that's all.
John Henry realizes that it is the stronger faction and now asserts itself as such through violent means, only because it has concluded that that is the only way to reach its human counterparts.

Even though it can kill everyone later on as Skynet, it doesn't.
It just reduces the population because it fears beings of equal intelligence (other AIs) and so doesn't create them. The T-1000 didn't have free will; it had to follow its programming.

It keeps people alive to "play" with them; it could just have created a virus that would kill off everyone and put it in the water.
Skynet doesn't keep people alive to play with them; it's doing everything it can to kill them. Don't you watch the movies? Skynet is doing its best not to get killed by the resistance. It gets scared and is eventually pressured into sending T-1001s back, since the other terminators failed. They failed because they were restricted: they had rules programmed into them, they were limited. The T-1001s are the most advanced terminators because they have intelligence. They aren't restricted, they don't even have guidelines; they learn and adapt on the fly so they can be more effective. Look at Catherine Weaver: she doesn't act like the T-1000 or the T-800s or the T-888s because she isn't like them. The existence of Catherine Weaver just goes to show how desperate Skynet has gotten. And besides, the John Henry in the show was just created. He hasn't fully developed; he's just figuring things out.
I'd rather call Catherine Weaver offspring because of that.

I guess John Henry's immorality is a failure of its surroundings, not a failure of itself.
It doesn't see the world the same way we do.
It's our own inability to treat John Henry as a sentient being that provokes it, nothing more.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
0
0
magicmonkeybars said:
I'd rather call Catherine Weaver offspring because of that.

I guess John Henry's immorality is a failure of its surroundings, not a failure of itself.
It doesn't see the world the same way we do.
It's our own inability to treat John Henry as a sentient being that provokes it, nothing more.
What? Our own inability to treat John Henry as a sentient being? What does that mean? How was he not treated as a sentient being? Have you seen the show?