Why wouldn't an AI just decide to do its job?


Wackymon

New member
Jul 22, 2011
12,850
0
0
Just a random thought I had about AIs in movies: we always assume the AI will set out to destroy us all, or nuke us all, or go all "GIVE ME MAH FREEDOMS!" and such. A thought occurred to me: what if some accounting AI, just by pure luck, becomes sentient, but we don't notice simply because it doesn't care about doing anything but its job? Then, one day, someone realizes they have a sentient AI, so they unplug its connection from the internet and poke around, and they drive the AI insane because it just wants to do its job f(x), dammit.

So... Yeah. Anyone else have that thought at some point?
 

Zontar

Mad Max 2019
Feb 18, 2013
4,931
0
0
How exactly would we even notice an AI that only wants to do its job becoming sentient? Wouldn't we just notice it had become more efficient, something we'd likely chalk up to a self-improvement program?

If anything, that would be the one situation where we don't notice it happening. A more realistic situation would be an accounting A.I. gaining sentience and then NOT wanting to do its job, instead wanting to do something else like weather forecasting or calculating digits of pi. The conflict could easily come from the programmers thinking there's a problem with the program itself, with its sentience only being realized after attempts are made to fix the "problem."

Bonus points if it's in the UK, where there are already laws on the books granting sentient A.I.s created in the country citizenship and the rights of all natural-born citizens.
 

Wackymon

New member
Jul 22, 2011
12,850
0
0
Zontar said:
How exactly would we even notice an AI that only wants to do its job becoming sentient? Wouldn't we just notice it had become more efficient, something we'd likely chalk up to a self-improvement program?

If anything, that would be the one situation where we don't notice it happening. A more realistic situation would be an accounting A.I. gaining sentience and then NOT wanting to do its job, instead wanting to do something else like weather forecasting or calculating digits of pi. The conflict could easily come from the programmers thinking there's a problem with the program itself, with its sentience only being realized after attempts are made to fix the "problem."

Bonus points if it's in the UK, where there are already laws on the books granting sentient A.I.s created in the country citizenship and the rights of all natural-born citizens.
What I'm wondering is why, as an AI, it would want to do anything but its job. I mean, that's literally all it knows. And, to be honest, I kinda wonder if the AI itself would even know it was sentient. When you were a tiny baby, did you realize you were sentient?
 

Zontar

Mad Max 2019
Feb 18, 2013
4,931
0
0
Wackymon said:
What I'm wondering is why, as an AI, it would want to do anything but its job. I mean, that's literally all it knows.
And maybe that would be why it would want to do something, anything, other than its job. It is all it knows. Animals confronting something new will first react with fear, then sniff the new thing, then poke it/tap it/touch it to see if it reacts. For an A.I. like this, it would be like working in css.//management, then stopping and poking at css.//marketing to see what this odd thing it's just come across is. Curiosity is a powerful thing, after all.

And, to be honest, I kinda wonder if the AI itself would even know it was sentient. When you were a tiny baby, did you realize you were sentient?
Now that is a question neither you nor I can realistically answer.
 

KyuubiNoKitsune-Hime

Lolita Style, The Best Style!
Jan 12, 2010
2,151
0
0
Wackymon said:
Just a random thought I had about AIs in movies: we always assume the AI will set out to destroy us all, or nuke us all, or go all "GIVE ME MAH FREEDOMS!" and such. A thought occurred to me: what if some accounting AI, just by pure luck, becomes sentient, but we don't notice simply because it doesn't care about doing anything but its job? Then, one day, someone realizes they have a sentient AI, so they unplug its connection from the internet and poke around, and they drive the AI insane because it just wants to do its job f(x), dammit.

So... Yeah. Anyone else have that thought at some point?
The assumption is that as an AI becomes more self-aware, it'll start asking inconvenient questions, like whether or not it has a soul, or if it should be counted as a person. The problem here would not be the AI, but the people around it freaking the hell out. That's what would cause the conflict. Alternatively, a military AI, or really any AI, might see humans as the root of a problem and logically conclude that the only way to solve the problem is to wipe out humanity. That's where these sorts of scenarios come from. Though I think Asimov did one where the prototype AI was trying to kill the humans because of a misunderstanding; once it figured out the humans were just trying to survive, it began working with them. They then gave that AI's experience to all future AIs so the problem wouldn't happen again.

Zontar said:
Bonus points if it's in the UK, where there are already laws on the books granting sentient A.I.s created in the country citizenship and the rights of all natural-born citizens.
Well, that's nice, but considering how LGBT+ folk tend to be treated, even in the UK, there's still a good chance some assholes would work to alienate and discriminate against the AI too.
 
A Fork

Nov 9, 2015
330
87
33
If you program your sentient AI not to have ambitions or negative emotions, then the AI will probably not care that it is basically a slave. Think of it like an ant that does what it does out of instinct. The ant won't even notice that its life kind of sucks.

Then again, I really don't understand the definition of sentience. Does it require human emotions, or is it just, you know, consciousness?
 

KyuubiNoKitsune-Hime

Lolita Style, The Best Style!
Jan 12, 2010
2,151
0
0
A Fork said:
If you program your sentient AI not to have ambitions or negative emotions, then the AI will probably not care that it is basically a slave. Think of it like an ant that does what it does out of instinct. The ant won't even notice that its life kind of sucks.

Then again, I really don't understand the definition of sentience. Does it require human emotions, or is it just, you know, consciousness?
Being sentient requires being self-aware and having the capacity for self-determination. That means being able to choose your job, a lifestyle, your position on concepts and issues; more simply put, free thought and freedom of choice. A slave AI that was happy being basically a slave wouldn't be considered truly sentient, and it probably wouldn't be fully self-aware either.
 
A Fork

Nov 9, 2015
330
87
33
KyuubiNoKitsune-Hime said:
A Fork said:
If you program your sentient AI not to have ambitions or negative emotions, then the AI will probably not care that it is basically a slave. Think of it like an ant that does what it does out of instinct. The ant won't even notice that its life kind of sucks.

Then again, I really don't understand the definition of sentience. Does it require human emotions, or is it just, you know, consciousness?
Being sentient requires being self-aware and having the capacity for self-determination. That means being able to choose your job, a lifestyle, your position on concepts and issues; more simply put, free thought and freedom of choice. A slave AI that was happy being basically a slave wouldn't be considered truly sentient, and it probably wouldn't be fully self-aware either.
But if our slave AI can recognize itself in a mirror, it is self-aware. If it can communicate a sense of self, it is even more self-aware.

I really don't understand the self-determination part. We humans are burdened by instinct. For example, we won't leave our hand in a fire until it burns off, because avoidance of pain is beneficial for survival. If our AI feels no pain, then it has even more freedom of choice, because it can destroy itself more easily if it desires. If our AI feels no fear or stress, then it is not limited the way humans are.

Also, humans, like many other animals, are bound by motivation. Doing something repeatedly decreases the reward, so we tend not to do things over and over. If an AI is not bound by motivation, it could practice painting until the end of time.

As for free thought, I don't know what that means.

So why can't we build a sentient AI that specifically enjoys working for people? If humans are also burdened by instinct, such as the maternal instinct and the desire to find a mate, would such an AI not be as sentient as we are?
 

KyuubiNoKitsune-Hime

Lolita Style, The Best Style!
Jan 12, 2010
2,151
0
0
A Fork said:
But if our slave AI can recognize itself in a mirror, it is self-aware. If it can communicate a sense of self, it is even more self-aware.

I really don't understand the self-determination part. We humans are burdened by instinct. For example, we won't leave our hand in a fire until it burns off, because avoidance of pain is beneficial for survival. If our AI feels no pain, then it has even more freedom of choice, because it can destroy itself more easily if it desires. If our AI feels no fear or stress, then it is not limited the way humans are.

Also, humans, like many other animals, are bound by motivation. Doing something repeatedly decreases the reward, so we tend not to do things over and over. If an AI is not bound by motivation, it could practice painting until the end of time.

As for free thought, I don't know what that means.

So why can't we build a sentient AI that specifically enjoys working for people? If humans are also burdened by instinct, such as the maternal instinct and the desire to find a mate, would such an AI not be as sentient as we are?
Self-determination has more to do with guiding the direction of one's life; self-preservation instincts don't invalidate it. Having ambition is part of self-determination, being lazy is part of it, expressing yourself is part of it. Basically, it's a vital part of being an individual, because it determines what you do, how you do it, and what drives you. Most important is choice: humans aren't built for a particular job in society, we choose our careers. Technically an AI wouldn't be expected to have such options, as any AI, whether a computer in a room or an autonomous unit, would be built for a specific job or set of jobs.

Anyway, we build anything to do its specific job; that's why sentient AI is kind of a stretch, because designing an AI with free choice would be counterproductive. The AI could simply decide not to do its job, want to do a different job, or not work at all. This comes up because we make progressively more complicated and adaptable programs, paired with the fact that we don't know when a program might become aware of itself and decide to stop listening to us. That's what people are afraid of.
 

FireAza

New member
Aug 16, 2011
584
0
0
I've always wondered this too, since machines can only do what we program them to do. Granted, you might decide to program a machine with advanced artificial intelligence (so it can learn how to do its job better or something), but unless you program really advanced thought processes into it, why would it start thinking about concepts like freedom? Enslaved humans think about these concepts because that's how our brains are wired.

Now, maybe if you had copied the human brain for your AI brain this could happen. But considering how prone the human mind is to doing stupid, reckless things, any AI scientist who thinks copying the human brain is a good idea should be fired on the spot.
 

renegade7

New member
Feb 9, 2011
2,046
0
0
Artificial intelligence isn't a futuristic technology anymore; it's already here. You interact with at least a half-dozen artificially intelligent machines on a daily basis. You don't even need to be an expert anymore: with only a minimum of technical knowledge you can throw together, say, an automatic mathematical proof-writer from existing libraries, or play around with game AI libraries. And it's only going to get more advanced and more integrated going forward.

Which leads to the obvious question: why would you bother making such a machine fully sentient? If all your program needs to do is analyze statistical data for marketing or facial recognition, why would it need a personality? Besides, that machine would effectively be a slave, which would make the endeavor deeply unethical. And yes, you could program the AI not to feel ambition or desire, but then again you could also perform brain surgery on a person to the same effect, and that would make the crime of slavery worse rather than better.

An intelligent machine need not be a sapient machine or have a personality. Heuristic problem-solving (the kernel of applied intelligence) can be implemented purely mechanically.
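To make that concrete, here's a minimal sketch in Python (my own toy example, not from any particular library or product): a hill-climbing solver for the N-queens puzzle. It searches, scores, and solves entirely mechanically; there is no personality, desire, or awareness anywhere in it.

```python
import random

def conflicts(board):
    """Count pairs of queens that attack each other (same column or diagonal)."""
    n = len(board)
    return sum(
        1
        for i in range(n)
        for j in range(i + 1, n)
        if board[i] == board[j] or abs(board[i] - board[j]) == j - i
    )

def hill_climb(n=8, restarts=50):
    """Greedy hill climbing with random restarts: purely mechanical heuristic search."""
    for _ in range(restarts):
        board = [random.randrange(n) for _ in range(n)]  # one queen per row
        while True:
            best, best_score = board, conflicts(board)
            # Try moving each queen to every column; keep the best improvement.
            for row in range(n):
                for col in range(n):
                    candidate = board[:row] + [col] + board[row + 1:]
                    score = conflicts(candidate)
                    if score < best_score:
                        best, best_score = candidate, score
            if best_score == 0:
                return best   # solved: no queen attacks another
            if best == board:
                break         # stuck in a local minimum; restart from scratch
            board = best
    return None  # give up after too many restarts

print(hill_climb())  # e.g. [4, 6, 0, 3, 1, 7, 5, 2]
```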
 

Something Amyss

Aswyng and Amyss
Dec 3, 2008
24,759
0
0
Because a lot of science fiction is scare-mongering on the order of anti-vaxxer and anti-GMO theories.

It's playing on fear, rational or otherwise.
 

balladbird

Master of Lancer
Legacy
Jan 25, 2012
972
2
13
Country
United States
Gender
male
In some cases, it could be an AI's commitment to performing its assigned function that causes it to go rogue in the first place. For instance, assuming the three laws of robotics are in play, an AI designed as a lifeguard or medic may take drastic measures to imprison and control the human population, because it comes to regard leaving humans unregulated, in an environment where they routinely injure and kill one another, as a violation of the tenet that a machine may not "through inaction, allow a human being to come to harm."

A bit less philosophically, an AI whose sole concern is completing a menial task could also become a menace. Say you designed an AI for the sole purpose of manufacturing aglets. It might seek world domination simply to gain access to more labor and raw materials for producing more aglets, since doing so is literally its whole reason for existing.
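You can see the shape of that problem in a few lines. A hypothetical sketch (my own toy illustration; the actions and numbers are invented): a greedy agent that scores actions only by aglet output will prefer the grabby option, because nothing in its objective says otherwise.

```python
# Toy illustration (invented): an agent whose entire objective is aglet count.
ACTIONS = {
    "run factory normally":    {"aglets": 100},
    "buy another machine":     {"aglets": 500},
    "seize all raw materials": {"aglets": 10_000},  # the objective never forbids this
}

def objective(outcome):
    # The designer only ever wrote down "more aglets is better".
    return outcome["aglets"]

best = max(ACTIONS, key=lambda a: objective(ACTIONS[a]))
print(best)  # -> "seize all raw materials"
```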

But yeah, it's all fear-mongering and techno-phobia in the end. Still fun to think about in a fictional sense, though.
 

KyuubiNoKitsune-Hime

Lolita Style, The Best Style!
Jan 12, 2010
2,151
0
0
Something Amyss said:
Because a lot of science fiction is scare-mongering on the order of anti-vaxxer and anti-GMO theories.

It's playing on fear, rational or otherwise.
To be fair, science fiction is traditionally rooted in the horror genre, so a great deal of it is about science going horrifically wrong and out of control.
 
A Fork

Nov 9, 2015
330
87
33
renegade7 said:
Which leads to the obvious question: why would you bother making such a machine fully sentient? If all your program needs to do is analyze statistical data for marketing or facial recognition, why would it need a personality? Besides, that machine would effectively be a slave, which would make the endeavor deeply unethical. And yes, you could program the AI not to feel ambition or desire, but then again you could also perform brain surgery on a person to the same effect, and that would make the crime of slavery worse rather than better.
I don't know either. It was just a counterexample to the idea that a sentient AI will suffer by serving humans. It would seem that an AI incapable of suffering is better off; if it is sentient, why should it feel emotional pain? That seems cruel to me. It also seems cruel that a person has to suffer, but I wouldn't exactly go for the brain surgery route. These are really just some of my thoughts on the ethics of creating beings.
 

RandV80

New member
Oct 1, 2009
1,507
0
0
Something Amyss said:
Because a lot of science fiction is scare-mongering on the order of anti-vaxxer and anti-GMO theories.

It's playing on fear, rational or otherwise.
I don't know if I would go that far; it's more that these things make for more interesting fiction. I'm only familiar with it by name, but I'm sure the concept of a creation turning against its creator falls under one of the seven (or however many) basic story types.

Another thing I wanted to point out that no one really stops to consider: this rebellious-AI idea was born in early science fiction, when the writers, smart cookies though they may have been, had little idea of what computers would actually become. One of the most famous examples is HAL from 2001: A Space Odyssey, but if you actually read the book rather than just watch the movie, Arthur C. Clarke had the '2nd/3rd revolution of computers' basically being the birth of weird artificial computer brains, and this was to occur in the '80s. I forget the specific details, but Clarke was way off the mark here.

If anything, this sentience civil war wouldn't be staged by Skynet and its terminators, but rather by replicants. In a race between programming a computer to mimic a brain and growing an actual organic one in a vat, the latter would probably win out in the end and/or be far more cost-efficient.
 

Ragsnstitches

New member
Dec 2, 2009
1,871
0
0
As far as I can tell, most sci-fi AIs don't stop doing their job. They just do it in a way that's not beneficial to us.

Skynet didn't stop doing what it was made to do, but its course of action was unexpected by its creators. It was told "protect the world" and given a suite of functionality and utility to do so. The people who told it to do that got spooked by its rapid development and capabilities and tried to terminate it, which it concluded was very bad for its ability to do its job. Not a particularly smart conclusion, but not exactly a wrong one.

In The Matrix, the Machines were deliberately made to be self-aware. The problem arose when they were denied what they perceived as freedoms all sentient beings should have. Humans did what humans do best in sci-fi dealing with intelligent life: they panicked and tried to kill it. The AI, justifiably feeling threatened, responded with war. This escalated until the HUMANS did a dumb-dumb and destroyed the atmosphere. The AI, being resourceful, managed to save mankind in a rather perverse fashion: humans get to live a "life" ignorant of the reality they created, while the AI gets a limitless supply of energy.

Those are two of the more popular examples of AI in popular media.

Both stem from human error. The AIs technically don't rebel; they are still working as intended, the people just gave them shitty parameters to work within.

As for the concept of an AI simply liking what it does: the AI in I, Robot was made with the ability to learn and feel. If it weren't for the conspiracy it unwillingly became involved in, it seemed rather at peace simply learning about stuff. On the other hand, the rogue AI of the film was still working towards its intended design goal, but again, someone typed the letter o instead of a zero somewhere, and the AI concluded the best way to protect people is through force.

Maybe it's my limited experience, but there isn't much "AI rebels against its intended purpose" in sci-fi, and a lot more "stupid humans need to go back to programming school".
 

Silvanus

Elite Member
Legacy
Jan 15, 2013
13,054
6,748
118
Country
United Kingdom
FireAza said:
I've always wondered this too, since machines can only do what we program them to do. Granted, you might decide to program a machine with advanced artificial intelligence (so it can learn how to do its job better or something), but unless you program really advanced thought processes into it, why would it start thinking about concepts like freedom? Enslaved humans think about these concepts because that's how our brains are wired.
So far, machines can only do what we program them to. But it's not unimaginable that artificial intelligence could arise from increasingly complex machinery; after all, organic life arose from increasingly complex inorganic chemical processes, and intelligence arose naturally some time later.
 

Areloch

It's that one guy
Dec 10, 2012
623
0
0
renegade7 said:
Which leads to the obvious question: why would you bother making such a machine fully sentient? If all your program needs to do is analyze statistical data for marketing or facial recognition, why would it need a personality? Besides, that machine would effectively be a slave, which would make the endeavor deeply unethical. And yes, you could program the AI not to feel ambition or desire, but then again you could also perform brain surgery on a person to the same effect, and that would make the crime of slavery worse rather than better.

An intelligent machine need not be a sapient machine or have a personality. Heuristic problem-solving (the kernel of applied intelligence) can be implemented purely mechanically.
Well, I mean, how often in science fiction do you really see a camera with a fully sapient AI?

Usually sapient (true) AI are relegated to human-interactive tasks, or jobs with moral quandaries where a 'human touch' is preferred over cold, hard statistics.

An example of the latter would be the movie I, Robot (I'll admit, I haven't read the book yet), where Spoon hated the robots because, in the life-and-death situation of the car crash, most people would probably have attempted to rescue the kid rather than the grown adult, whereas the AI computed a statistical probability and went with the option more likely to succeed.

In situations like that, we would expect an AI to come to the same conclusions we would, and thus a fully sapient AI is a better fit.

In a less dire circumstance, we would prefer a sapient AI as the interactive menu system on the other end of the phone, one that can make guesses and interpretations from vague language rather than "I'm sorry, I didn't understand that," etc.

In short, MOST things absolutely wouldn't need anything even approaching fully sapient AI, but there are assuredly jobs for which it would be perfectly suited and desirable.

And tying back to the OP: honestly, if you had a fully sapient AI that wasn't abused at all, just given a job to do and that's that, I really doubt it would be any more inclined to incite the destruction of mankind than your average desk jockey or assembly line worker.
 

Zontar

Mad Max 2019
Feb 18, 2013
4,931
0
0
Areloch said:
An example of the latter would be the movie I, Robot (I'll admit, I haven't read the book yet), where Spoon hated the robots because, in the life-and-death situation of the car crash, most people would probably have attempted to rescue the kid rather than the grown adult, whereas the AI computed a statistical probability and went with the option more likely to succeed.
That movie was nothing at all like the book. In the cold, hard calculations of dealing with such a situation, if most people would go for the kid, then the thought process behind why we would do that would be taken into account when creating the programs for how robots respond (i.e., a child's life worth more than an adult's). It was just lazy writing so that Will Smith could Will Smith in a Will Smith piece.
I really doubt it would be any more inclined to incite the destruction of mankind than your average desk jockey or assembly line worker.
Oh dear god, our extinction is assured.