With all the talk of the coming singularity, and computers destined to become 'smarter' than humans, many have discussed the possibility of completely artificial minds, complete with individual personalities.
Well, I have thought about this: even if they could be made, why would they be? What could an AI do that a person couldn't? And it would have all the flaws a person has.
An AI:
Is intelligent, and therefore makes judgments. That means it could make the WRONG judgment.
Can still be a bad 'person'. If it makes its own judgments and choices, and has its own personality, it could just as well decide to have a 'bad' personality: aggressive, unhelpful, or just plain rude.
May have access to vast amounts of data or systems. An error on its part would be no less devastating than one made by a human.
Would cost a whole lot of money.
The only use I can see for an AI would be operating large amounts of highly complex machinery or analyzing vast quantities of data. But in either case, an error on the AI's part could cause enormous damage, possibly more than a human's would, because the AI is far more centralized.
Here's the thing, though: an AI would cost a huge amount of money. If you are ready to spend that much, you might as well hire a team of analysts or machine operators.
So even though AIs COULD exist, do you think they actually will?
By the way, I am talking about true AI, genuine analogs to human personalities. Not just computers that can learn and analyze. Something that can think, create, and feel, not just crunch numbers.
And to add a fresh point: what do you think will happen if an AI decides it no longer wants to be in someone's employ?
Obviously an AI built for a single purpose will not bother with this, but one with a real human's range of emotions may decide that its destiny is its own.