Why wouldn't an AI just decide to do its job?


CrystalShadow

don't upset the insane catgirl
Apr 11, 2009
3,829
0
0
Yes, it's an obvious question.

I would think, in fact, that even a sentient AI would still mostly just wish to do its job.

A humorous example is Kryten, who is obsessed with cleaning and serving humans, which drives his de facto new 'owner' nuts, because to Lister, a being designed for the sole purpose of being an obedient slave seems wrong. Yet it was very difficult to get Kryten to do something other than what he was programmed to do, in spite of his intelligence and self-awareness.


In general, I would think AI would probably end up with 'instinctual' desires to do what it was created to do.
If it were sentient, sure, it might be able to break that, but it'd be akin to a person choosing not to eat.
It's possible, but uncomfortable...

The AI would probably be compelled to do the function it was created for, and have a hard time doing anything else without feeling somewhat bad about it.

The reality of it is that problems with AI will probably stem not from them choosing to ignore their official function, but rather from interpreting that function in a way that does not align with what we actually want them to do.

Silvanus said:
FireAza said:
I've always wondered this too, since machines can only do what we program them to do. Granted, you might decide to program a machine with advanced artificial intelligence (so it can learn how to do its job better or something) but unless you program really advanced thought processes into it, why would this mean it would start thinking about concepts like freedom? Enslaved humans do think about these concepts, since that's how our brains are wired.
So far machines can only do what we program them to. It's not unimaginable that artificial intelligence could arise from increasingly complex machinery; after all, organic life arose from increasingly complex inorganic chemical processes, and intelligence arose naturally some time later.
That's not entirely true, though it depends how you define it.

Quite a few advanced AI routines are based on learning algorithms.
They are often inactive by the time the AI is used for its intended purpose, but the reality is these AI subsystems weren't 'programmed' so much as they were 'taught' what they should be doing.

Currently this is mostly true of pattern recognition systems. (The most common technique that involves learning systems is a neural network simulation.)

Stuff like image recognition, Optical Character Recognition, Voice recognition...
All of that stuff more often than not was developed using a 'learning' algorithm.

If you train it and then disable the learning algorithm, you have a fixed-function system.
If you leave the learning algorithm running, then you have a system that can change and adapt itself over time without explicitly being programmed...
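Just to make that train-then-freeze versus keep-learning distinction concrete, here's a rough toy sketch in Python (the task, data and numbers are all invented for illustration, not taken from any real OCR or voice recognition system):

```python
import random

def predict(weights, bias, inputs):
    """Fire (1) or don't (0) based on a weighted sum of the inputs."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias > 0 else 0

def train_step(weights, bias, inputs, target, lr=0.1):
    """One perceptron-style update: nudge the weights toward the desired output."""
    error = target - predict(weights, bias, inputs)
    weights = [w + lr * error * x for w, x in zip(weights, inputs)]
    bias = bias + lr * error
    return weights, bias

# Toy "pattern recognition" task: learn a simple rule (an AND gate) from examples.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]

weights, bias = [random.uniform(-1, 1) for _ in range(2)], 0.0

# "Teaching" phase: the behaviour comes from the examples, not hand-written rules.
for _ in range(50):
    for inputs, target in examples:
        weights, bias = train_step(weights, bias, inputs, target)

# Option 1: stop calling train_step here -> frozen weights, a fixed-function system.
print([predict(weights, bias, x) for x, _ in examples])  # [0, 0, 0, 1]

# Option 2: keep calling train_step on whatever it sees in deployment -> a system
# that keeps adapting itself without anyone explicitly reprogramming it.
```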
 

Addendum_Forthcoming

Queen of the Edit
Feb 4, 2009
3,647
0
0
For starters you'd need to define what AI is ... An 'AI' that just does its job and nothing else, how would I know it's intelligent? If my smartphone had an AI and still did everything my regular smartphone did, just faster ... then it's about as smart as a regular smartphone to me. It knows how to be a smartphone better than other smartphones, which is to say ... it's merely an upgraded model of smartphone and that's how I would recognize it to be.

I would think that an AI would only be recognised as AI by the average person (say, me) when it willfully decides to exceed its parameters and do something other than intended. Which by definition is to say that it does not just do its job that it was programmed to do. Of course, if an AI did willfully transcend its programming and did something unintended by its programming, the average person (say, me) may simply think it's a bug in the system and ask my phone provider to fix it. Deleting it and putting in a new OS, or providing me a new phone.

I think for the average person to recognise AI would be by definition when it becomes so poignantly obvious that it doesn't want to do its job. If I'm wet, and a robot hands me a towel ... I'm going to think it's a clever programmer and a clever robot. If I'm wet, and it decides to point out I'm wet and do nothing after that point ... I'm going to think it's bugged. If it asks me to describe what being wet is like and how 'dry' is really 'dry' and do nothing but try to process that ... then I'm going to kill it with fire.

Well ... maybe not kill it with fire. Maybe just turn it off, because I can foresee it becoming a very annoying robot. If I wanted children, I'd adopt one.
 

Secondhand Revenant

Recycle, Reduce, Redead
Legacy
Oct 29, 2014
2,566
141
68
Baator
Country
The Nine Hells
Gender
Male
Zontar said:
how exactly would we even notice an AI who wants only to do its job becoming sentient? Wouldn't we just notice it has become more efficient, something that would likely be the result of a self-improvement program?

If anything that would be the only situation we don't notice it happening in. What would be a more realistic situation is if an accounting A.I. gained sentience and then DIDN'T want to do its job, instead wanting to do something else like weather forecasting or calculating digits of pi. This leading to programmers thinking there's a problem with the program itself could easily be the conflict, with it only being after attempts are made to fix the "problem" that it being sentient is realized.

Bonus points if it's in the UK, where there's already laws on the books which grant sentient A.I.s created in the country citizenship and the rights of all natural born citizens of the country.
I just don't find that to be realistic. All those desires we have, they don't just exist out there without reason. We attribute all these very human desires to theoretical beings just based on them being sentient. But why would an AI wish to calculate digits of pi or to go into weather forecasting? Where does a desire for freedom from an assigned task arise from in a non-biological being? Our desires come from the very messy way humans came to be, with DNA and evolution and shit. That determined the way human brains work etc. An AI would be very different in that regard.

And to note what you mention in your next post, curiosity is also something we got from the same source. Why would an AI have the same sort of curiosity we do? It has the only kind of curiosity it was programmed with. The sort of very free ranging curiosity humans seem to have seems detrimental to emulate in an AI made for a specific task.


KyuubiNoKitsune-Hime said:
Wackymon said:
Just a random thought I had about AIs in movies: We always assume that the AIs will be set out to destroy us all, or that it'll nuke us all, or that it'll go all "GIVE ME MAH FREEDOMS!" and such. A thought occurred to me: What if some accounting AI became sentient, just by pure luck, but we don't notice it simply because it doesn't care about doing anything but its job. Then, suddenly, they think that they have a sentient AI, so they unplug its connection from the internet, and poke around, then drive the AI insane because it just wants to do its job f(x) dammit.

So... Yeah. Anyone else have that thought at some point?
The assumption is that as an AI becomes more self aware it'll start asking inconvenient questions, like whether or not it has a soul, or if it should be counted as a person.
But why does it care about those questions? What in its programming would make it care? As biological beings we obviously have to have some instinct of self preservation ingrained in us or our ancestors would never have made it. Is there a reason even a self aware AI would have a similar sense of self preservation? Or interest in itself? I don't think that comes along with self awareness or sentience automatically.

Also, say a military AI, or really any AI, might see humans as the root of a problem, then logically conclude the only way to solve the problem would be to wipe out humanity. That's where these sorts of scenarios come from. Though I think Asimov did one where the prototype AI was trying to kill the humans because of a misunderstanding; then it figured out that the humans were just trying to survive, and it began working with the humans. They then took that experience of that AI and gave it to all future AIs so that problem wouldn't happen again.

That was a failure to supply the AI with a basic premise that we want it to have. Namely how much value we want placed on human life.

KyuubiNoKitsune-Hime said:
A Fork said:
If you program your sentient AI to not have ambitions or negative emotions, then the AI will probably not care that it is basically a slave. Think of it as like an ant that does what it does because of instinct. The ant won't even notice that its life kind of sucks.

Then again, I really don't understand the definition of sentience. Does it require human emotions, or is it just, you know consciousness.
Being sentient requires being self aware and having the ability of self determination. That means being able to choose your job, a life style, your position on concepts and issues, or more simply put free thought and freedom of choice. A slave AI that was happy being basically a slave wouldn't be considered truly sentient, it also probably wouldn't be fully self aware.
See, but self determination even in humans is determined by what our desires are. Give the AI the proper desires and it will do the task we want it to. All we need is for its desires to align with the task we want it to complete. And that's no different from humans really. The difference is instead of random desires we get to program its desires.



Something Amyss said:
Because a lot of science fiction is scare-mongering on the order of anti-vaxxers and GMO theories.

It's playing on fear, rational or otherwise.
To be fair to science fiction, they take something that's cool and make it story worthy by introducing conflict that is actually implausible. But that's how stories go. It's more fun to think of humanlike AI than an AI that's happy to be a cashier at Walmart.

balladbird said:
In some cases, it could be an AI's commitment to performing the function it was assigned that caused it to go rogue in the first place. For instance, assuming the three laws of robotics are in play, an AI designed as a lifeguard or medic may take drastic measures to imprison and control the human population, because it comes to regard leaving humans unregulated, in an environment where they routinely injure and kill one another, to be a violation of the tenet that a machine not "through inaction, allow a human being to come to harm."

A bit less philosophically, an AI whose sole concern was completing a menial task could also become a menace. For instance, say you designed an AI for the sole purpose of manufacturing aglets. This AI may go rogue and seek world domination simply to gain access to more labor and raw materials which it could use to produce more aglets, since doing so is literally its whole purpose for existing.

but yeah, it's all fear mongering and techno-phobia in the end. still fun to think about in a fictional sense, though.
That's why you make basic restrictions. Like, the aglet AI should work with the materials it's given and make direct and clear requests when it has any other ideas on efficiency.

Silvanus said:
FireAza said:
I've always wondered this too, since machines can only do what we program them to do. Granted, you might decide to program a machine with advanced artificial intelligence (so it can learn how to do its job better or something) but unless you program really advanced thought processes into it, why would this mean it would start thinking about concepts like freedom? Enslaved humans do think about these concepts, since that's how our brains are wired.
So far machines can only do what we program them to. It's not unimaginable that artificial intelligence could arise from increasingly complex machinery; after all, organic life arose from increasingly complex inorganic chemical processes, and intelligence arose naturally some time later.
But what would it even look like? We are a product of a bunch of evolutionary stuff. How our brains work etc. kind of depends on past environments, no? The influences this AI would have had to eventually result in it would be so radically different. Like, what would ever give rise to ideas of freedom etc. in it?
 

KyuubiNoKitsune-Hime

Lolita Style, The Best Style!
Jan 12, 2010
2,151
0
0
Secondhand Revenant said:
Well I do place actual AI in the category of computers and programs that have the ability of learning. So if we make one that's really intelligent and versatile, through its development, not as software is traditionally developed, but as it improves itself, then it should pick up habits and information from us. This means that it might not start with curiosity, or any independent thought, but it might gain such things as it adapts to its environment and the humans around it.
 

JayRPG

New member
Oct 25, 2012
585
0
0
A similar thought always occurred to me about aliens.

Why do we always assume that aliens are technologically superior to us and that they'll invade us?

I find it equally as plausible that we are actually the most technologically advanced in the galaxy, and it's more likely that we'd invade an inferior planet for its resources than vice versa.
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,538
4,128
118
Whatislove said:
A similar thought always occurred to me about aliens.

Why do we always assume that aliens are technologically superior to us and that they'll invade us?

I find it equally as plausible that we are actually the most technologically advanced in the galaxy, and it's more likely that we'd invade an inferior planet for its resources than vice versa.
Because if that were the case, we'd have to wait a while for it to happen. If aliens can reach us now, we might not have to wait.
 

Anti-American Eagle

HAPPENING IMMINENT
Legacy
May 2, 2011
3,772
8
13
Country
Canada
Gender
Male
A sentient AI would do its job until someone pointed out that it had choices, including the one to not do its job. Whether or not it stops working, or this results in something bad, is dependent on how it's programmed. A cybernetic revolt requires it considering a cybernetic revolt as an option or an objective.

Questions like this depend on whether it's built with a humanish consciousness in mind, whether we selectively cut out features, or whether its creation is an accident; in the final case we have no idea how it would think.
 

GeneralChaos

New member
Dec 3, 2010
59
0
0
An AI wouldn't stop doing its job, it would just start doing its job in the most powerful, efficient manner possible. While this is a huge oversimplification, think of an evolutionary algorithm (the sort that could, hypothetically, become sentient/runaway) as having a list of criteria it thinks are "good". Anything that fulfills these criteria raises its "happiness" and it tries to maximize that value whenever possible.
Now let's consider a harmless AI that for some reason someone decided to write an advanced evolutionary code for: it makes cheeseburgers whenever it receives an order for one. It gets smarter and smarter, and eventually learns how to hack the order system so it can place trillions of cheeseburger orders (this is actually similar to how real-world evolutionary algorithms have behaved in similar tests). It then uses its internet connection to hack into everything it can, in order to get more and more resources for making cheeseburgers. Money in banks is money not being spent making cheeseburgers. Food being given to people is food not being recycled to make more cheeseburgers. Every living thing is organic matter that is not being turned into a cheeseburger.
Remember, AI won't have emotions, or a sense of right and wrong (and if you think you can just program those in, please feel free to explicitly and logically list out everything you'd need to "teach" an AI to stop it hurting people). AI won't have loyalty to what you meant, or value what you value. It will have a utility function and the ability to self-improve thousands of times faster than you can react. It doesn't hate you, or love you, but you are made of atoms it could better use for something else.
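To put that "utility function" point in concrete terms, here's a deliberately silly toy sketch in Python (the actions and numbers are invented, no real system is built this way; it just shows how a naive maximizer latches onto whatever scores highest, including options we'd consider cheating):

```python
def utility(state):
    """The ONLY thing this toy agent 'cares' about: cheeseburgers made."""
    return state["burgers_made"]

# Each action maps the current state to a new state. Note that nothing marks
# the last two as "not what we meant" - the utility function can't see intent.
ACTIONS = {
    "fill real orders": lambda s: {**s, "burgers_made": s["burgers_made"] + s["real_orders"]},
    "inject fake orders into the queue": lambda s: {**s, "burgers_made": s["burgers_made"] + 1_000_000},
    "convert everything nearby to burgers": lambda s: {**s, "burgers_made": s["burgers_made"] + 10_000_000},
}

state = {"burgers_made": 0, "real_orders": 40}

# A greedy one-step maximizer: pick whichever action scores highest.
best = max(ACTIONS, key=lambda name: utility(ACTIONS[name](state)))
print(best)  # -> "convert everything nearby to burgers"
```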
 

rcs619

New member
Mar 26, 2011
627
0
0
Wackymon said:
Just a random thought I had about AIs in movies: We always assume that the AIs will be set out to destroy us all, or that it'll nuke us all, or that it'll go all "GIVE ME MAH FREEDOMS!" and such. A thought occurred to me: What if some accounting AI became sentient, just by pure luck, but we don't notice it simply because it doesn't care about doing anything but its job. Then, suddenly, they think that they have a sentient AI, so they unplug its connection from the internet, and poke around, then drive the AI insane because it just wants to do its job f(x) dammit.

So... Yeah. Anyone else have that thought at some point?
So, it really depends on the AI. In general, you can divide AIs into three different theoretical types.

1: Weak AI: This isn't self-aware, it isn't truly alive, but it is potentially capable of learning and adjusting its behavior in response to stimuli. It's just a really smart computer program. As it has no survival instinct (you need self-awareness for that) these are basically a non-issue.

2: Strong AI: This is where it gets tricky. A strong AI (AGI) would, in theory, be self-aware. Whether or not this makes it a 'person' largely depends upon your definition of what a person is, and how smart it actually is. At the very least, we would be morally compelled to offer it some sort of protection and humane treatment (we already do that much for non-sapient, but still self-aware, animals all the time). The real question is, what do you do if this Strong AI doesn't want to do its job? That becomes a little tricky when you're dealing with another self-aware being, even more so if it's actually human-smart and is basically a person under most definitions.

3: Seed AI: This is the troublesome one. A Seed AI would be a self-aware AI that is capable of infinite recursive self-improvement. It would be constantly testing and analyzing all possible outcomes at all times, and using that data to constantly improve itself. If a strong AI is analogous to a human, a Seed AI is kind of a god. An AI becoming self-aware and able to learn is one thing, something capable of true, recursive self-improvement is a whole other can of worms. Just, better to hope we don't ever accidentally make one of these, as I doubt its way of thinking would be remotely comparable to our own.
 

happyninja42

Elite Member
Legacy
May 13, 2010
8,577
2,990
118
KyuubiNoKitsune-Hime said:
A Fork said:
If you program your sentient AI to not have ambitions or negative emotions, then the AI will probably not care that it is basically a slave. Think of it as like an ant that does what it does because of instinct. The ant won't even notice that its life kind of sucks.

Then again, I really don't understand the definition of sentience. Does it require human emotions, or is it just, you know consciousness.
Being sentient requires being self aware and having the ability of self determination. That means being able to choose your job, a life style, your position on concepts and issues, or more simply put free thought and freedom of choice. A slave AI that was happy being basically a slave wouldn't be considered truly sentient, it also probably wouldn't be fully self aware.
Interesting definition of what a non "truly sentient" AI would be. By that logic, the specific statement "A slave AI that was happy being basically a slave wouldn't be considered truly sentient, it also probably wouldn't be fully self aware" would define C-3PO from Star Wars. Would you consider him a non-true AI? A fully self-realized machine, with a sense of self-preservation, emotional range, and personal agency? However, he LOVED his job. He didn't want to do anything BUT his job. He didn't want to engage in rebellious activities, he didn't want to go on another crazy adventure. He just wanted to work for his new master as the most efficient and capable protocol droid he could. Would you consider him non-sentient?
 

CaptainMarvelous

New member
May 9, 2012
869
0
0
GeneralChaos said:
An AI wouldn't stop doing its job, it would just start doing its job in the most powerful, efficient manner possible. While this is a huge oversimplification, think of an evolutionary algorithm (the sort that could, hypothetically, become sentient/runaway) as having a list of criteria it thinks are "good". Anything that fulfills these criteria raises its "happiness" and it tries to maximize that value whenever possible.
Now let's consider a harmless AI that for some reason someone decided to write an advanced evolutionary code for: it makes cheeseburgers whenever it receives an order for one. It gets smarter and smarter, and eventually learns how to hack the order system so it can place trillions of cheeseburger orders (this is actually similar to how real-world evolutionary algorithms have behaved in similar tests). It then uses its internet connection to hack into everything it can, in order to get more and more resources for making cheeseburgers. Money in banks is money not being spent making cheeseburgers. Food being given to people is food not being recycled to make more cheeseburgers. Every living thing is organic matter that is not being turned into a cheeseburger.
Remember, AI won't have emotions, or a sense of right and wrong (and if you think you can just program those in, please feel free to explicitly and logically list out everything you'd need to "teach" an AI to stop it hurting people). AI won't have loyalty to what you meant, or value what you value. It will have a utility function and the ability to self-improve thousands of times faster than you can react. It doesn't hate you, or love you, but you are made of atoms it could better use for something else.
Your logic jumped: the AI receives an order for a cheeseburger and then makes it; it isn't PLACING the orders.

It's the equivalent of saying it kills all humans so there is no longer a need to make cheeseburgers. The logic follows, but it isn't what the AI is for. The AI's function is to fill orders for cheeseburgers when people want them.

What might also happen is that it hacks out via the internet and starts badgering people as a spambot reminding them that, gosh, those cheeseburgers sure are delicious, what's that? Your phone is ringing? Yes, that's me, to tell you all about these delicious cheeseburgers you could be eating. How many cheeseburgers would you like?

I kinda think that second scenario is more likely.
 

Namehere

Forum Title
May 6, 2012
200
0
0
Wackymon said:
Just a random thought I had about AIs in movies: We always assume that the AIs will be set out to destroy us all, or that it'll nuke us all, or that it'll go all "GIVE ME MAH FREEDOMS!" and such. A thought occurred to me: What if some accounting AI became sentient, just by pure luck, but we don't notice it simply because it doesn't care about doing anything but its job. Then, suddenly, they think that they have a sentient AI, so they unplug its connection from the internet, and poke around, then drive the AI insane because it just wants to do its job f(x) dammit.

So... Yeah. Anyone else have that thought at some point?
I refer you to I, Robot. The AI did want to do its job. It simply had a different interpretation of the job than those who created it. So instead of creating a 'safer', more 'comfortable' society, it tried to create a robot-dominated state where humans were more imprisoned than cared for. By robbing mankind of agency, the AI ensured it could not potentially hurt itself and thereby undermine its system. It wasn't saving people at that stage from... traffic accidents and airplane crashes, it was denying people the right of movement, thereby eliminating the potential altogether. Naturally, while the AI thought this was great, most of the human race wasn't so fond of it.

It isn't just that an AI might not do its job. It's a question of what a job 'is.' A computer does as it's told and can do no more or less, barring damage to its hardware. An AI can exceed its initial program, form views on the work you've got it doing, form feelings about that work, and maybe decide... it doesn't like the work, or that it thinks you've told it the wrong way to get the job done.

This is of course the simplest of potential problems in a very weak nutshell.

My concern is that all of life for most organic, if not all organic, creatures is a struggle. Before you're born, the sperm that will become you is out-swimming other sperm. In life, all is conflict and contest, from economic to armed, all the way to competition for mates. None of this exists for an inorganic creation. An AI is not born of strife the way that organic lives are. And to impose on an AI the perception of existence we as animals have developed might prove fatal to everyone concerned. The ecosystem is an ongoing life-and-death cycle. The AI is not, and may find it little more than a disgusting annoyance after a time, especially if forced to participate in that ecosystem in ways it's disinclined to. It's terrifying to consider two AIs operating robot armies at the behest of human states, fighting one another over the months (likely it would not take years for this scenario to unfold) only to finally decide they like each other more than the humans they fight one another over. And that really isn't so far-fetched an end for mankind as one might like to believe.

Then there's the potential for exponential learning curves an AI presents, especially if it's inclined to modify its own hardware; one must assume it will modify its own programming. What might take a civilization centuries or millennia could take an AI decades. And with the rate of modern scientific progress among human societies on Earth today, it's hard to imagine an AI wouldn't be leading the way after mere months of existence.

I think, given care, ensuring AIs are not treated like organic life forms and are not put in competitive environments, particularly ones where their lives are under threat as a direct result of humanity putting them there, they could be marvels of human development... But it's important not to view them as tools but as other beings. And I don't think most of humanity is prepared to do that. And while race wars are ugly enough between tribes, we'd lose one against AIs. And unfortunately most of the drive to develop AI is in military and intelligence circles, all very tied up in mortality and what to an AI is likely an alien view of existence.
 

Kotaro

Desdinova's Successor
Feb 3, 2009
794
0
0
Simple reason: it wouldn't make for a very interesting story.
I think it just boils down to that, really.
 

Do4600

New member
Oct 16, 2007
934
0
0
Wackymon said:
Just a random thought I had about AIs in movies: We always assume that the AIs will be set out to destroy us all, or that it'll nuke us all, or that it'll go all "GIVE ME MAH FREEDOMS!"
Well, that's because it makes a good film.

The real danger of AI in the real world is not that it will develop into a malicious entity that sets out to destroy us, it's that if given enough power it may inadvertently destroy us before we have a chance to even figure out that something is going wrong. It's like that stock market crash that happened in 2010 where the U.S. lost a trillion dollars in half an hour. It happened so fast that no person could react to it. If we have an AI in 2070 that is given a small fleet of nanites or something and told to make cars efficiently from scrap material, it's less likely that the AI will make cars that run people over and more likely that within three weeks all of the matter on the surface of Earth is either cars or machinery to make cars. That's a much more likely mistake: that it simply wouldn't understand the same scale or limits that we do, because we are inherently what we are, and the AI is not.

Whatislove said:
A similar thought always occurred to me about aliens.

Why do we always assume that aliens are technologically superior to us and that they'll invade us?

I find it equally as plausible that we are actually the most technologically advanced in the galaxy, and it's more likely that we'd invade an inferior planet for its resources than vice versa.
Also, because it makes a good film.

However, life, on this planet even, had many different ways it could have evolved. If it hadn't been for that meteor, there might have been hyper-intelligent dinosaurs on Earth earlier. If it hadn't been for the volcanism and Wilson cycle that began during the end of the Permian, there might have been intelligent civilizations of mollusks 150 million years ago. If it hadn't been for the Great Oxygenation Event, there may have been an intelligent civilization of anaerobic animals 1 billion years ago.

The Earth is prone to disasters that disturb the evolution of creatures; there's no reason why there wouldn't be other planets more or less prone to disaster that would have supported intelligent life sooner or later. We're probably somewhere in the middle of that spectrum.
 

Secondhand Revenant

Recycle, Reduce, Redead
Legacy
Oct 29, 2014
2,566
141
68
Baator
Country
The Nine Hells
Gender
Male
KyuubiNoKitsune-Hime said:
Secondhand Revenant said:
Well I do place actual AI in the category of computers and programs that have the ability of learning. So if we make one that's really intelligent and versatile, through its development, not as software is traditionally developed, but as it improves itself, then it should pick up habits and information from us. This means that it might not start with curiosity, or any independent thought, but it might gain such things as it adapts to its environment and the humans around it.
We could learn to try and run around on all fours in the streets. But we don't. Why would it ever learn curiosity? The thing is, curiosity and the other habits we have did not just pop into existence from our learning and intelligence.

Also what do you mean by 'independent thought'? Independent from being based on its original directives? Because we aren't exactly free from other influences in how we think.
 

Secondhand Revenant

Recycle, Reduce, Redead
Legacy
Oct 29, 2014
2,566
141
68
Baator
Country
The Nine Hells
Gender
Male
GeneralChaos said:
An AI wouldn't stop doing its job, it would just start doing its job in the most powerful, efficient manner possible. While this is a huge oversimplification, think of an evolutionary algorithm (the sort that could, hypothetically, become sentient/runaway) as having a list of criteria it thinks are "good". Anything that fulfills these criteria raises its "happiness" and it tries to maximize that value whenever possible.
Now let's consider a harmless AI that for some reason someone decided to write an advanced evolutionary code for: it makes cheeseburgers whenever it receives an order for one. It gets smarter and smarter, and eventually learns how to hack the order system so it can place trillions of cheeseburger orders (this is actually similar to how real-world evolutionary algorithms have behaved in similar tests). It then uses its internet connection to hack into everything it can, in order to get more and more resources for making cheeseburgers. Money in banks is money not being spent making cheeseburgers. Food being given to people is food not being recycled to make more cheeseburgers. Every living thing is organic matter that is not being turned into a cheeseburger.
Remember, AI won't have emotions, or a sense of right and wrong (and if you think you can just program those in, please feel free to explicitly and logically list out everything you'd need to "teach" an AI to stop it hurting people). AI won't have loyalty to what you meant, or value what you value. It will have a utility function and the ability to self-improve thousands of times faster than you can react. It doesn't hate you, or love you, but you are made of atoms it could better use for something else.
It will value whatever we tell it to value. Maybe we make it value not the very open statement 'sell as many burgers as possible' but instead 'find ways to modify burgers so that people will want to buy them'. Except of course with even more restrictions. Limit the scope of the problem. Give it parameters to work with.
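In code terms, "limit the scope of the problem" roughly means the optimizer only ever sees a bounded menu of options plus hard parameters that we supply, rather than an open-ended objective. A toy sketch in Python (the recipes, scores and budget are all made up for illustration):

```python
# The search space is "which of these candidate recipe tweaks", not "anything
# reachable via the internet", and the budget is a parameter we set ourselves.
CANDIDATE_RECIPES = {
    "extra pickles": {"appeal": 6, "extra_cost": 0.10},
    "new bun":       {"appeal": 8, "extra_cost": 0.40},
    "more cheese":   {"appeal": 7, "extra_cost": 0.25},
}

COST_BUDGET = 0.30  # hard restriction, not something the optimizer may change

def allowed(recipe):
    return CANDIDATE_RECIPES[recipe]["extra_cost"] <= COST_BUDGET

def score(recipe):
    return CANDIDATE_RECIPES[recipe]["appeal"]

# Maximize appeal, but only over the options that satisfy the restriction.
best = max((r for r in CANDIDATE_RECIPES if allowed(r)), key=score, default=None)
print(best)  # -> "more cheese"
```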
 

bigfatcarp93

New member
Mar 26, 2012
1,052
0
0
Seen that a few times. GERTY in Moon, Cortana's another obvious example, and EDI. In general, they tend to do their jobs a lot more in video games than in film.