Yes, it's an obvious question.
I would think, in fact, that even a sentient AI would still mostly just wish to do its job.
A humorous example is Kryten from Red Dwarf, who is obsessed with cleaning and serving humans, and it drives his de facto new 'owner' nuts, because to Lister, a being designed for the sole purpose of being an obedient slave seems wrong. Yet it was very difficult to get Kryten to do anything other than what he was programmed to do, in spite of his intelligence and self-awareness.
In general, I would think AI would probably end up with 'instinctual' desires to do what it was created to do.
If it were sentient, sure, it might be able to break that, but it'd be akin to a person choosing not to eat.
It's possible, but uncomfortable...
The AI would probably be compelled to perform the function it was created for, and would have a hard time doing anything else without feeling somewhat bad about it.
The reality is that problems with AI will probably stem not from their choosing to ignore their official function, but from interpreting that function in a way that does not align with what we actually want them to do.
Silvanus said:
"So far machines can only do what we program them to. It's not unimaginable that artificial intelligence could arise from increasingly complex machinery; after all, organic life arose from increasingly complex inorganic chemical processes, and intelligence arose naturally some time later."

FireAza said:
"I've always wondered this too, since machines can only do what we program them to do. Granted, you might decide to program a machine with advanced artificial intelligence (so it can learn how to do its job better or something), but unless you program really advanced thought processes into it, why would this mean it would start thinking about concepts like freedom? Enslaved humans do think about these concepts, since that's how our brains are wired."

That's not entirely true, though it depends how you define it.
Quite a few advanced AI routines are based on learning algorithms.
These may be inactive by the time the AI is used for its intended purpose, but the reality is that these subsystems weren't 'programmed' so much as they were 'taught' what they should be doing.
Currently this is mostly true of pattern recognition systems. (The most common learning technique is a simulated neural network.)
Stuff like image recognition, optical character recognition, voice recognition... more often than not, all of it was developed using a 'learning' algorithm.
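To make the 'taught, not programmed' distinction concrete, here's a minimal Python sketch: a toy single-neuron perceptron that I'm using purely for illustration (it's nobody's real recognition engine), with the logical-OR pattern standing in for whatever pattern you'd actually want recognized.

```python
# A toy perceptron "taught" the logical-OR pattern from labelled examples.
# Purely illustrative: a hypothetical two-input unit, not a real recognizer.

examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # Weighted sum of the inputs, followed by a hard threshold.
    total = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if total > 0 else 0

# The "teaching" step: nudge the weights whenever the output is wrong.
# Nobody writes a rule for OR; the behaviour emerges from the examples.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # -> [0, 1, 1, 1]
```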
If you train it and then disable the learning algorithm, you have a fixed-function system.
If you leave the learning algorithm running, you have a system that can change and adapt itself over time without being explicitly programmed...
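Here's a rough sketch of that difference, again using a toy perceptron; the learning_enabled flag is a hypothetical name for 'disabling the algorithm', not anything from a real library.

```python
class Recognizer:
    """Toy stand-in for any learned pattern recognizer (hypothetical)."""

    def __init__(self, n_inputs, learning_rate=0.1):
        self.weights = [0.0] * n_inputs
        self.bias = 0.0
        self.learning_rate = learning_rate
        self.learning_enabled = True  # the switch in question

    def predict(self, x):
        total = sum(w * xi for w, xi in zip(self.weights, x)) + self.bias
        return 1 if total > 0 else 0

    def observe(self, x, target):
        # Feed one labelled example; the weights shift only while learning is on.
        if self.learning_enabled:
            error = target - self.predict(x)
            for i, xi in enumerate(x):
                self.weights[i] += self.learning_rate * error * xi
            self.bias += self.learning_rate * error
        return self.predict(x)

r = Recognizer(2)
for _ in range(20):  # "teach" it the OR pattern
    for x, target in [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]:
        r.observe(x, target)

r.learning_enabled = False  # freeze: from here on, a fixed-function system
# ...or leave it True, and every observe() call keeps reshaping the weights.
```

The object is the same either way; the only difference is whether the update rule keeps firing after deployment, which is exactly the fixed-function versus adaptive split.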