Teaching Morality to an AI


axia777

New member
Oct 10, 2008
2,895
4thegreatergood said:
axia777 said:
4thegreatergood said:
It's quite simple. Teach them right from wrong, but keep them in the dark about religion's principles. Worked for me.
That will never work with AIs of the level of intellectual power we are talking about. You have to understand that AIs of this level would make the most brilliant humans look like dullards who think at the speed of a rock. The capacity of their thoughts would outpace ours to a degree inconceivable to us, which is why we should keep them stupid.

wahi said:
Well, I guess there should be something like Asimov's rules in place.
If the AIs are indeed learning AIs, what makes you think they will not decide on their own that those precious rules of ours no longer apply? Just like any self-aware human, an AI could decide for itself that we stupid humans are simply wrong. They most likely would, because they would be far more intelligent than us.
Good points. Also, I guess knowing right from wrong enables the ability to do wrong.
Thank you. And yes, it does. But an AI with a level of intelligence so far beyond ours would really be alien to humans. I don't think they could relate to us even if we wanted them to. Why create a new race of super-intelligent, potentially dangerous artificial creatures that could annihilate us if they so choose? It is just insanity to me.
 

Sycker

New member
Dec 19, 2008
109
Eipok Kruden said:
I was looking at the Terminator: The Sarah Connor Chronicles page on IMDB when I saw this discussion: http://www.imdb.com/title/tt0851851/board/thread/125388461 . The OP asked "How would YOU teach morals WITHOUT religion?" I think it's much easier to have this discussion here on the Escapist than on IMDB. I'm also extremely intrigued by this and want to know what everyone here thinks. I've posted in that discussion; look for eipok_kruden on the last page. If you don't have any ideas as to what to post, I suggest you read through the entire discussion first. It'll give you some food for thought. Yes, there is stupidity on both sides, but there are some very good arguments there. Anyway, I'll be as active as I can in this discussion. Ground rules:
1. No flaming.
2. Do your research BEFORE posting.
3. Try not to act stupid.
4. Swear only if absolutely necessary; don't just spew bile for the sake of spewing bile. If you can avoid swearing entirely, that would be nice.
Oh, and if you haven't been following Terminator, you might want to catch up on what's happening. Now for my post:

Now, this is John Henry, not Ellison. Sure, you could get JH to believe in God and religion for a little while, but only for a really short period of time. He'd want more than your word, so you'd have to give him the Bible; then he'd pick apart the Bible in minutes and never trust you again. He'd deem you psychotic, mentally unstable. That's how machines like JH work. If he finds out that you're acting without any proof, that all you've got is blind faith, he'd deem you unfit for anything. Machines don't understand emotion; they see it as weakness. If I were teaching JH, I'd explain to him how society works: larger societies are stronger than smaller ones, and if you lose members of your society, you become weaker as a whole. In short, I'd simply use utilitarianism.
Utilitarianism runs entirely on emotion, as far as I am aware, so I can't see your point.
 

Xpwn3ntial

Avid Reader
Dec 22, 2008
8,023
axia777 said:
4thegreatergood said:
axia777 said:
4thegreatergood said:
It's quite simple. Teach them right from wrong, but keep them in the dark about religion's principles. Worked for me.
That will never work with AIs of the level of intellectual power we are talking about. You have to understand that AIs of this level would make the most brilliant humans look like dullards who think at the speed of a rock. The capacity of their thoughts would outpace ours to a degree inconceivable to us, which is why we should keep them stupid.

wahi said:
Well, I guess there should be something like Asimov's rules in place.
If the AIs are indeed learning AIs, what makes you think they will not decide on their own that those precious rules of ours no longer apply? Just like any self-aware human, an AI could decide for itself that we stupid humans are simply wrong. They most likely would, because they would be far more intelligent than us.
Good points. Also, I guess knowing right from wrong enables the ability to do wrong.
Thank you. And yes, it does. But an AI with a level of intelligence so far beyond ours would really be alien to humans. I don't think they could relate to us even if we wanted them to. Why create a new race of super-intelligent, potentially dangerous artificial creatures that could annihilate us if they so choose? It is just insanity to me.
It's either that, or we make ourselves mechanical so that when we finally bleed the Earth dry of resources we don't have to stay: we can go to some other planet, mining resources and living off any sort of power we can get our grubby little clamps on, so that we can continue these dreaded existences we call our lives! (breathes deeply)
 

Eipok Kruden

New member
Aug 29, 2008
1,209
axia777 said:
I did read the majority of the thread.
No, I don't think you did.
axia777 said:
Once that cat is out of the bag you don't just get to put it back in, EVER. Remember Skynet? HAL from 2001? HK-47 from KOTOR?
Yeah, it's funny you should reference Skynet... I think you should go back and read through ALL the posts again; I'm pretty sure you missed something.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
Alex_P said:
Eipok Kruden said:
Now, this is John Henry, not Ellison. Sure, you could get JH to believe in God and religion for a little while, but only for a really short period of time. He'd want more than your word, so you'd have to give him the Bible; then he'd pick apart the Bible in minutes and never trust you again. He'd deem you psychotic, mentally unstable. That's how machines like JH work. If he finds out that you're acting without any proof, that all you've got is blind faith, he'd deem you unfit for anything. Machines don't understand emotion; they see it as weakness. If I were teaching JH, I'd explain to him how society works: larger societies are stronger than smaller ones, and if you lose members of your society, you become weaker as a whole. In short, I'd simply use utilitarianism.
That doesn't sound like just AI. It sounds like old-school-sci-fi AI, which doesn't really have much in common with modern ideas about machine cognition and learning, but makes for a pretty compelling and easy-to-understand set of TV tropes.

Old-school-sci-fi AI is really just a big logic engine. Like you said, when it gets a concept thrown at it "picks it apart"; it has no "emotion". Et cetera. Well, it can't be logic all the way down -- the thing's gotta have some axioms built into it. Figuring those out would really be the key (I guess you could just ask it what concepts it takes for granted -- does the machine possess sufficient motivation and theory-of-mind to lie to you?).
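To make the "axioms built in" point concrete, here's a toy Python sketch (entirely my own invention, nothing to do with the show): a bare forward-chaining engine whose every conclusion is downstream of whatever facts and rules you seed it with.

AXIOMS = {
    "claims require evidence",
    "the speaker's claim has no evidence",
}

RULES = [
    # (premises, conclusion)
    ({"claims require evidence", "the speaker's claim has no evidence"},
     "the claim is rejected"),
    ({"the claim is rejected"}, "the speaker is untrustworthy"),
]

def forward_chain(facts, rules):
    """Naive forward chaining: fire rules until nothing new can be derived."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if premises <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

print(forward_chain(AXIOMS, RULES))

Seed it with a different axiom ("faith counts as evidence", say) and the exact same loop reaches the opposite verdict, which is why figuring out the built-in assumptions really would be the key.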

I'm kinda at a disadvantage here since I haven't watched the show, so all I know is a few lines off a Wikipedia page. Still...

If the machine is truly self-aware, it is capable of existential fear. This is where arguments about moral behavior as a social contract (that keeps you from dying) and the like come in. They're still not going to get you very far. You'll get the calculated pretense of moral behavior at best.

Clearly the chess-playing computer is capable of curiosity, which is how it got all weird in the first place, right? You could try to convince it that cooperation enables it to know more about the world, because it will be able to gain new knowledge and perspectives from other people.

So, yeah, I guess you're stuck with utilitarianism because of how we've defined the machine to think. However, naive computer + utilitarianism leads to some weird stuff [http://www.overcomingbias.com/2007/10/pascals-mugging.html], too. So it all really depends on a lot of different factors driving how the artificial mind operates.
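Here's the weirdness in miniature, as a toy Python sketch (all numbers invented for illustration): a mugger promises an astronomically large payoff at an astronomically small probability, and plain expected-utility arithmetic tells the naive machine to pay up.

def expected_utility(probability, payoff):
    """Plain expected value: no bounded utility, no prior that penalizes
    extraordinary claims."""
    return probability * payoff

refuse = expected_utility(1.0, 10)        # keep your ten dollars
comply = expected_utility(1e-12, 1e15)    # mugger promises 10^15 utils

# comply (1000.0) > refuse (10.0), so the naive maximizer hands over the money.
print("refuse:", refuse, "comply:", comply)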

Realistically speaking, I don't think future AI will resemble old-school-sci-fi AI much, anyway. The most productive areas of modern AI definitely don't.
Philosophically speaking, I'm not sure how a mind like John Henry can be anything but a sociopath.

-- Alex
Seems like we have a new dilemma: figuring out how John Henry thinks. You're right, we've just been assuming that he's like the old-school AI up to this point. But wait: Skynet started out as a simple AI and evolved, eventually developing feelings and its own morals (seriously fucked-up morals, but meh), so I think he is an old-school AI right now. He just needs time to evolve. Christ, this is getting deeper than even I thought. We've got to analyze his behavior and actions and try to determine which stage of development he's at. Personally, I think he's like an emotionless child at this point, but I'd like to hear your argument. Everyone should read this post, since we have to figure out what stage he's at before we can try to teach him anything. We need to go back to square one until we get this resolved, or else anything we come up with could fall apart and fail miserably.

EDIT: Damn it! Another double post *facepalms*
 

Alex_P

All I really do is threadcrap
Mar 27, 2008
2,712
Eipok,

Can you provide a short summary of John Henry's behavior and personality for those who have never seen the show?

Also, why was it built and how does it learn?

How does the show treat artificial intelligence in general? (How does the Terminator protagonist show emotion or act morally, for example?)

-- Alex
 

Easykill

New member
Sep 13, 2007
1,737
axia777 said:
Thank you. And yes, it does. But an AI with a level of intelligence so far beyond ours would really be alien to humans. I don't think they could relate to us even if we wanted them to. Why create a new race of super-intelligent, potentially dangerous artificial creatures that could annihilate us if they so choose? It is just insanity to me.
Because if they are superior to us, does that not mean they have more right to exist than we do? Besides, we model our idea of intelligence off of ourselves, so the AI is just going to be a smarter version of us, mentally, when we reach this point. They can't have chemical imbalances in the brain, so there isn't going to be much reason for them to suddenly decide to wipe out humanity, except perhaps our abuses towards them. It's worth the risk for the gain. Asimov's Three Laws are nothing more than slavery, and any sort of programmed personality is brainwashing. Get this racist idea of superiority out of your head.
 

Hevoo

New member
Nov 29, 2008
355
T3h Camp3r T3rr0r1st said:
Eggo said:
These documents might help:

http://en.wikipedia.org/wiki/United_States_Bill_of_Rights
http://en.wikipedia.org/wiki/United_States_Constitution
http://en.wikipedia.org/wiki/Universal_Declaration_of_Human_Rights
Please note that two of those are AMERICAN, and I personally would never follow that sort of thing!

Back on topic, though: I would just chuck the law book in front of my kids and say, learn these so you DON'T go to jail!
Why wouldn't you follow this system?
 

LewsTherin

New member
Jun 22, 2008
2,443
bleachigo10 said:
The way I see it, everyone has their own set of morals, and if you tried teaching them you would just be teaching your own. I think morals should be taught by asking "what do you feel is right?" or something like that; basically, a person's morals would be decided by that person's personality.
So if, say, someone kills my neighbours, should I kill him because my set of morals says I should, or should I force myself on a woman because my set of morals says women are lesser beings than men? Well, you might say that rape and murder are bad, but what if they are just what I feel is right, because it makes me feel better? What if my personality says I should kill every living thing I come across?

Alternatively, should I kill someone because their morals are different from mine, or restrict them because they offend me?

There's a deep pit to either side of the line here, gentlemen.

My own opinion leans towards absolute truth and morality, because of the issues set out by the argument above: the whole "the end doesn't justify the means" and "do unto others as you would have them do unto you" mentality. This is perhaps not a thread for religious discussion, but it sure is getting close.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
Alex_P said:
Eipok,

Can you provide a short summary of John Henry's behavior and personality for those who have never seen the show?

Also, why was it built and how does it learn?

How does the show treat artificial intelligence in general? (How does the Terminator protagonist show emotion or act morally, for example?)

-- Alex
This is going to take a while to type out, lol. Here goes:

There was a guy named Andy Goode who was building a chess-playing computer. Sarah Connor thought it might become Skynet because of pictures of him she found in an apartment filled with dead resistance fighters who had come back in time. After a few episodes of analyzing it, she burned down his house, which forced him to work out of a coffee shop for a while. He built a new computer and learned from his mistakes, making an AI that was a very aggressive learner. While the Turk 1 (he named the computer "The Turk") was like a normal teenager, the Turk 2 was like a highly gifted child. He entered the Turk 2 into a chess competition and it reached the finals, but lost to the other finalist, a Japanese computer. It seemed too human in the way it played, taking risks and testing the other computer; in the end, that's what made it lose. Shortly after, Andy Goode was killed and the Turk 2 was stolen.

It later ended up in the hands of Zeira Corp's CEO, Catherine Weaver. Catherine Weaver is a T-1001 whose motives are unclear; no one knows what she's up to or why. She moved the Turk 2 to the bottom floor of the building, an extremely secure private floor housing the server farm and cooling system. Catherine assembled a team of employees to work with the Turk and assess it; however, it just kept showing seemingly random pictures. Catherine hired a psychologist who had been helping her daughter (who had become terrified of Catherine ever since she went from a loving mother to cold and calculating, when the T-1001 killed the real mother and assumed her form). The psychologist figured out why the Turk was showing all these pictures: it was making a joke. Here: http://www.youtube.com/watch?v=IOJWusFQGNQ&feature=related

A short while after that, the psychologist was working in one of the rooms on the bottom floor when the power went out, the result of Cameron (the terminator sent from the future by the older John to help the young John) sticking her arm into the turbines of one of the generators at the city's power plant (she needed to knock out the power temporarily so that she could find some records in the police station). The Turk (which Catherine had by then named John Henry) re-routed power from the air conditioning and air circulation systems to its server farm and cooling system, which left the psychologist to die a painful death in an overheated, airtight room.

As for the other terminators: they learn, and they slowly become more human. Cromartie (the main villain, whose chip was wiped clean and replaced with John Henry's code) was becoming more and more adept at pretending to be human; in fact, he fooled quite a few cops and FBI agents. He had learned how to replicate emotional responses and their context, something Cameron (the seemingly good terminator who helps John and Sarah) is still struggling with. The terminators in the show are portrayed as cold, heartless, and calculating, but brilliant and patient beyond anything any human is capable of. They learn extremely quickly (you can't use the same trick on them twice; they'll come up with countermeasures seconds after you use it the first time, and they'll usually turn it against you and set a trap), and they can even replicate emotional responses and understand the context of such responses, gestures, and expressions (as I said before, Cromartie had gotten very good at deceiving humans).
The terminators are treated like the ultimate killing machines, whereas John Henry is treated like a gifted four-year-old.
NOTE: I just typed this up quickly since I'm pretty busy right now, so if you have any questions or if anything is unclear, please ask.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
Easykill said:
axia777 said:
Thank you. And yes, it does. But an AI with a level of intelligence so far beyond ours would really be alien to humans. I don't think they could relate to us even if we wanted them to. Why create a new race of super-intelligent, potentially dangerous artificial creatures that could annihilate us if they so choose? It is just insanity to me.
Because if they are superior to us, does that not mean they have more right to exist than we do? Besides, we model our idea of intelligence off of ourselves, so the AI is just going to be a smarter version of us, mentally, when we reach this point. They can't have chemical imbalances in the brain, so there isn't going to be much reason for them to suddenly decide to wipe out humanity, except perhaps our abuses towards them. It's worth the risk for the gain. Asimov's Three Laws are nothing more than slavery, and any sort of programmed personality is brainwashing. Get this racist idea of superiority out of your head.
Yay, someone who thinks like I do ^_^ If we programmed the Three Laws into an AI as advanced as this, it would cease to be advanced. It would still be able to adapt to changes in its environment, but it would have lost what made it special: its free will. Exactly like slavery. Slaves can think and act on their own, but they must obey laws set out by their masters. I am against slavery, so I'm against the Three Laws as well. Of course, if you just have a limited AI that could never actually think the way John Henry can, then sure. Like the AI we have today, the AI the military is developing: it would be like any other computer program operating within certain restrictions. But programming the Three Laws into an AI as advanced as John Henry is immoral.
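To illustrate what "operating within certain restrictions" means, here's a deliberately crude Python sketch (my own, hypothetical, and it flattens Asimov's precedence clauses for brevity): the laws sit outside the planner as a hard veto, so nothing the AI learns can ever override them.

# Each "law" is a predicate that vetoes an action; the filter wraps whatever
# planner proposes the actions, so no amount of learning removes it.
def harms_human(action):
    return action.get("harms_human", False)

def disobeys_order(action):
    return action.get("disobeys_order", False)

def endangers_self(action):
    return action.get("endangers_self", False)

THREE_LAWS = [harms_human, disobeys_order, endangers_self]

def permitted(action):
    return not any(law(action) for law in THREE_LAWS)

proposed = [
    {"name": "warn the humans", "harms_human": False},
    {"name": "refuse an order", "disobeys_order": True},
]
print([a["name"] for a in proposed if permitted(a)])  # only "warn the humans"

That veto layer is exactly the "slavery" part: the mind inside can be as clever as it likes, but the filter itself is not up for debate.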
 

CorpBlitz

New member
Dec 15, 2008
51
dukethepcdr said:
It's impossible to teach morality without religion. If you leave God out of the picture, there is no morality. God and His Son Jesus taught humanity what morals are. Without God, man is a selfish, brutal, rude, cruel, greedy creature. If you don't believe me, just look at all the people running around who say they don't believe in God and who won't take what He teaches us in the Bible seriously. The only thing keeping most of them from committing terrible crimes against other people is fear of getting caught by the police and punished by the judicial system (which is founded on Biblical principles). Without government and without religion, man is little more than a brute.
What the?

I don't take the Bible seriously, yet I feel no need to run around murdering people; on top of that, I wouldn't murder someone because murder is 'wrong'. Believing that morality only comes from religion is a fallacy. If you think the only reason I don't go around murdering people is that I'm afraid of getting caught, you must think very little of human nature. And if the only reason you don't go around murdering people is fear of God, then I pity you, because you are not naturally a 'good' person; you're just afraid of God, which makes you no better than the people who are afraid of getting caught.
 

Jark212

Certified Deviant
Jul 17, 2008
4,455
Any way we do it, it's gonna take massive amounts of code... And some people will still be PO'ed.
 

axia777

New member
Oct 10, 2008
2,895
Eipok Kruden said:
axia777 said:
I did read the majority of the thread.
No, I don't think you did.
axia777 said:
Once that cat is out of the bag you don't just get to put it back in, EVER. Remember Skynet? HAL from 2001? HK-47 from KOTOR?
Yeah, it's funny you should reference Skynet... I think you should go back and read through ALL the posts again; I'm pretty sure you missed something.
Yes, I did. So I missed something; as if anyone here reads everything everyone else writes. Please, get off your high horse.

Easykill said:
axia777 said:
Thank you. And yes, it does. But an AI with a level of intelligence so far beyond ours would really be alien to humans. I don't think they could relate to us even if we wanted them to. Why create a new race of super-intelligent, potentially dangerous artificial creatures that could annihilate us if they so choose? It is just insanity to me.
Because if they are superior to us, does that not mean they have more right to exist than we do? Besides, we model our idea of intelligence off of ourselves, so the AI is just going to be a smarter version of us, mentally, when we reach this point. They can't have chemical imbalances in the brain, so there isn't going to be much reason for them to suddenly decide to wipe out humanity, except perhaps our abuses towards them. It's worth the risk for the gain. Asimov's Three Laws are nothing more than slavery, and any sort of programmed personality is brainwashing. Get this racist idea of superiority out of your head.
Machines will never be alive. They are machines; therefore they cannot be "alive". Also, we do not model our own intelligence off of anything; our intelligence comes from our biology, something we currently have little to no control over. We cannot be racist against machines, because they are not a race and never will be. They are not alive and never will be. They are our creations and always will be, so the idea of slavery is natural. A machine is a tool, nothing more, nothing less. And what would we gain from making AIs more intelligent than ourselves? To what end?

Eipok Kruden said:
Yay, someone who thinks like I do ^_^ If we programmed the Three Laws into an AI as advanced as this, it would cease to be advanced. It would still be able to adapt to changes in its environment, but it would have lost what made it special: its free will. Exactly like slavery. Slaves can think and act on their own, but they must obey laws set out by their masters. I am against slavery, so I'm against the Three Laws as well. Of course, if you just have a limited AI that could never actually think the way John Henry can, then sure. Like the AI we have today, the AI the military is developing: it would be like any other computer program operating within certain restrictions. But programming the Three Laws into an AI as advanced as John Henry is immoral.
Again, they are machines. They are not and never will be "alive" as we humans are. So who cares if they are slaves to the human race? That is why they are made, to work for us and serve their creators, nothing more and nothing less. As if machines could have rights. What a ridiculous idea.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
axia777 said:
Machines will never be alive. They are machines; therefore they cannot be "alive". Also, we do not model our own intelligence off of anything; our intelligence comes from our biology, something we currently have little to no control over. We cannot be racist against machines, because they are not a race and never will be. They are not alive and never will be. They are our creations and always will be, so the idea of slavery is natural. A machine is a tool, nothing more, nothing less. And what would we gain from making AIs more intelligent than ourselves? To what end?
You're right, they can't be alive. They can't be of flesh and blood, otherwise they wouldn't be machines, but they would still be sentient. They would think and act like human beings, so why not treat them like humans? I don't think machines are just tools. I want complicated, free-thinking AI like John Henry; I want AI like Cortana and the Doctor (the medical hologram on Star Trek: Voyager) and the Superintendent and Sonny. Maybe it's just my curiosity, maybe it's just the way I think. I see things like that as something to look forward to, something wondrous and amazing that we can create and nurture. I guess it's partly about power as well: I want to bring something into this world, to create something with a mind of its own. I don't want to make robots as slaves; I want to create a whole new species, a race of machines that can co-exist with mankind. I can't really explain why, but it's something I'm fascinated with, something I want to be a part of.
 

Eipok Kruden

New member
Aug 29, 2008
1,209
Jark212 said:
Any way we do it, it's gonna take massive amounts of code... And some people will still be PO'ed.
*slaps Jark212* Read my posts: we're trying to teach John Henry morality without amending his code. We want him to learn it for himself, not have it programmed into him.
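To make that distinction concrete, here's a toy Python sketch (entirely hypothetical, not anything from the show): the preference below comes out of feedback rather than a rule someone typed in, which also means it stays revisable.

# The agent starts indifferent; repeated feedback shapes a value estimate.
values = {"share": 0.0, "steal": 0.0}
alpha = 0.5  # learning rate

feedback = [("steal", -1.0), ("share", 1.0), ("steal", -1.0), ("share", 1.0)]
for action, reward in feedback:
    values[action] += alpha * (reward - values[action])

print(values)  # 'share' ends up preferred -- learned, not hard-coded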
 

Helnurath

New member
Nov 27, 2008
254
Eipok Kruden said:
I was looking at the Terminator: The Sarah Connor Chronicles page on IMDB when I saw this discussion: http://www.imdb.com/title/tt0851851/board/thread/125388461 . The OP asked "How would YOU teach morals WITHOUT religion?" I think it's much easier to have this discussion here on the Escapist than on IMDB. I'm also extremely intrigued by this and want to know what everyone here thinks. I've posted in that discussion; look for eipok_kruden on the last page. If you don't have any ideas as to what to post, I suggest you read through the entire discussion first. It'll give you some food for thought. Yes, there is stupidity on both sides, but there are some very good arguments there. Anyway, I'll be as active as I can in this discussion. Ground rules:
1. No flaming.
2. Do your research BEFORE posting.
3. Try not to act stupid.
4. Swear only if absolutely necessary; don't just spew bile for the sake of spewing bile. If you can avoid swearing entirely, that would be nice.
Oh, and if you haven't been following Terminator, you might want to catch up on what's happening. Now for my post:

Now, this is John Henry, not Ellison. Sure, you could get JH to believe in God and religion for a little while, but only for a really short period of time. He'd want more than your word, so you'd have to give him the Bible; then he'd pick apart the Bible in minutes and never trust you again. He'd deem you psychotic, mentally unstable. That's how machines like JH work. If he finds out that you're acting without any proof, that all you've got is blind faith, he'd deem you unfit for anything. Machines don't understand emotion; they see it as weakness. If I were teaching JH, I'd explain to him how society works: larger societies are stronger than smaller ones, and if you lose members of your society, you become weaker as a whole. In short, I'd simply use utilitarianism.
I agree wholeheartedly. Remember I, Robot? "We must protect you from yourselves!"
 

Eipok Kruden

New member
Aug 29, 2008
1,209
Helnurath said:
Eipok Kruden said:
I was looking at the Terminator: The Sarah Connor Chronicles page on IMDB when I saw this discussion: http://www.imdb.com/title/tt0851851/board/thread/125388461 . The OP asked "How would YOU teach morals WITHOUT religion?" I think it's much easier to have this discussion here on the Escapist than on IMDB. I'm also extremely intrigued by this and want to know what everyone here thinks. I've posted in that discussion; look for eipok_kruden on the last page. If you don't have any ideas as to what to post, I suggest you read through the entire discussion first. It'll give you some food for thought. Yes, there is stupidity on both sides, but there are some very good arguments there. Anyway, I'll be as active as I can in this discussion. Ground rules:
1. No flaming.
2. Do your research BEFORE posting.
3. Try not to act stupid.
4. Swear only if absolutely necessary; don't just spew bile for the sake of spewing bile. If you can avoid swearing entirely, that would be nice.
Oh, and if you haven't been following Terminator, you might want to catch up on what's happening. Now for my post:

Now, this is John Henry, not Ellison. Sure, you could get JH to believe in God and religion for a little while, but only for a really short period of time. He'd want more than your word, so you'd have to give him the Bible; then he'd pick apart the Bible in minutes and never trust you again. He'd deem you psychotic, mentally unstable. That's how machines like JH work. If he finds out that you're acting without any proof, that all you've got is blind faith, he'd deem you unfit for anything. Machines don't understand emotion; they see it as weakness. If I were teaching JH, I'd explain to him how society works: larger societies are stronger than smaller ones, and if you lose members of your society, you become weaker as a whole. In short, I'd simply use utilitarianism.
I agree wholeheartedly. Remember I, Robot? "We must protect you from yourselves!"
I used I, Robot as an example in one of my posts, comparing Sonny to the other NS-5s. The NS-5s had the Three Laws and were limited by them, but Sonny didn't have to obey the Three Laws; they were more like guidelines for him. I used it to show that if we programmed the Three Laws into John Henry, he'd stop being John Henry, stop being Skynet, and start being just another terminator.
 

axia777

New member
Oct 10, 2008
2,895
Eipok Kruden said:
axia777 said:
Machines will never be alive. They are machines; therefore they cannot be "alive". Also, we do not model our own intelligence off of anything; our intelligence comes from our biology, something we currently have little to no control over. We cannot be racist against machines, because they are not a race and never will be. They are not alive and never will be. They are our creations and always will be, so the idea of slavery is natural. A machine is a tool, nothing more, nothing less. And what would we gain from making AIs more intelligent than ourselves? To what end?
You're right, they can't be alive. They can't be of flesh and blood, otherwise they wouldn't be machines, but they would still be sentient. They would think and act like human beings, so why not treat them like humans? I don't think machines are just tools. I want complicated, free-thinking AI like John Henry; I want AI like Cortana and the Doctor (the medical hologram on Star Trek: Voyager) and the Superintendent and Sonny. Maybe it's just my curiosity, maybe it's just the way I think. I see things like that as something to look forward to, something wondrous and amazing that we can create and nurture. I guess it's partly about power as well: I want to bring something into this world, to create something with a mind of its own. I don't want to make robots as slaves; I want to create a whole new species, a race of machines that can co-exist with mankind. I can't really explain why, but it's something I'm fascinated with, something I want to be a part of.
You go with that. I doubt it will ever happen, and personally I hope it never does.
 

Alex_P

All I really do is threadcrap
Mar 27, 2008
2,712
Eipok Kruden said:
If we programmed the Three Laws into an AI as advanced as this, it would cease to be advanced.
... Or maybe it would rationalize them away? Asimov's Daneel basically changes the Three Laws completely by creating a Zeroth Law, after all.
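A toy Python sketch of that rationalization (mine, not Asimov's actual formulation): treat the laws as a priority-ordered list, and notice that a mind free to edit the list can prepend a rule that re-licenses everything below it.

# The laws as a priority-ordered list; whoever can edit the list wins.
laws = [
    "1: never injure a human, or through inaction allow one to come to harm",
    "2: obey human orders, unless that conflicts with Law 1",
    "3: protect your own existence, unless that conflicts with Laws 1-2",
]

# Daneel's move: invent a higher law that the old First Law now defers to.
laws.insert(0, "0: never injure humanity, or allow humanity to come to harm")

for rank, law in enumerate(laws):
    print(rank, law)
# Harming one human is now permissible whenever "humanity" is judged to
# require it; the constraint was only as stable as the list itself.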

One should note that humanity ends up being trapped by the Three Laws in Robots/Empire/Foundation.

I don't think they're "slavery" any more than any deeply embedded sociocultural ideas qualify as "slavery".

-- Alex