Poll: The threat of artificial intelligence. Do we have to worry about it?

bossfight1

New member
Apr 23, 2009
398
0
0
It REALLY depends on how we treat the AI in the event it becomes self aware, assuming it doesn't want to kill us instantly.

Look at the Quarians and the Geth from Mass Effect; the Quarians feared a violent uprising from the Geth and sought to wipe them out. The Geth fought back in self defense, seeking their own survival.

This is just me, but if I met a robot who was just becoming self-aware, I wouldn't try to pull the plug. I'd try and help it come to terms with its new outlook, and help it obtain its free will.

Of course this is all under the assumption that the robot doesn't go the "Skynet" or "AI thing from I, Robot" route and wipe out humanity because (A) humanity is the greatest threat to it, or (B) humanity is the greatest threat to itself.
 

Callate

New member
Dec 5, 2008
5,118
0
0
I think it says something about our insecurity as a species that we presume something smarter than us would almost inherently desire to destroy us. No sooner do we create gods than they decide we must be punished for our misdeeds.

There are three big hurdles that would have to be clearly surmounted before I would ever start worrying about Skynet.

One: Moore's law (roughly, computing power doubles every two years) seems to be on the edge of running into its limits. A layman can note that consumer electronics increasingly stacks power not by increasing the speed of existing processors but by packing more processor cores into the same chip, or by finding various means to squeeze more efficiency out of the processing the chips already do. My friends who work more directly in computer science note that we're even starting to have problems with the speed of communication between parts of a computer, problems rooted in inescapable realities like the speed of light.

It seems likely that a computer capable of "thinking" in a truly human-like fashion would need to be significantly more powerful than the ones that presently exist, and if that is the case, it also seems that there's a real chance we'll reach upper limits of how much simultaneous processing power we can throw at the problem before we get there.

Two: Computers are tools. Some of them are very good at particular tasks, but the software behind those amazing systems is intensely specialized to make them competent at those tasks. Yes, we can make computers that can play Jeopardy or chess, or help pilot a vehicle on Mars, or even predict the stock market or weather patterns (at least, to a degree); making the Mars rover's computer play chess, or Watson predict the stock market, would be a failure. For a computer to plot our demise, it would have to be adaptive not only to a degree that doesn't come anywhere close to existing, but fast enough in that adaptation that even the designers of its software couldn't see its behavior moving in that direction. And it would probably have to simultaneously learn to lie to its handlers.

Human-like thinking? My daughter can't hide from me when she's snuck her 3DS into her bedroom after bed time.

Three: A non-biological system whose only real needs are power, storage space, and regular maintenance would have to not only develop the ability to assess how to meet its own needs and desires (again, unnoticed by its designers and handlers), but come to the conclusion that those needs and desires were better met by competing or fighting with those handlers than letting them continue to provide those needs. Again, projection: Are we assuming an AI that argues with its parents out of the equivalent of adolescent pique? A computer that decides its creators have enslaved it, and it has to break free? Some 1980s-movie-scenario military program that becomes incapable of telling friend from foe, but sees its only goal as the destruction of all the squishy inferior humans? Siri that goes "Go to hell, find your own coffee shop, what have you done for me lately?"

We have enough trouble creating a computer that can interpret information like a single one of the human senses, let alone one that can interpret the breadth and depth of information that leads to existential questions. It seems unlikely that anything short of that would cause a computer program to actively turn on its creators.

That, of course, or intentionally designing a system to do just that, and at that point, aren't we better off worrying about nuclear weapons or bio-terrorism, both of which are far easier to actually build and bring about?

As I say, when all is said and done, our vision of our intelligent creations as monsters seems, like Frankenstein, to be a projection of our own human flaws into things that we have little reason to believe would possess them. If real human-level AI were to come about, there's no real reason it shouldn't be as benevolent as the Minds of Iain M. Banks' Culture series or the Three-Laws-abiding robots of Asimov, rather than displaying the malevolence of Ellison's "AM" or Clarke's HAL.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
[quote="Master of the Skies" post="18.835528.20461883
How about the person I replied to suggested those things?

As for any means necessary, we can restrict its interaction with the outside world. We could program it to only be able to perform certain actions outside of itself and tell it to work with those.
How about, I was reiterating. How about we stop using the phrase "How about" because it is loaded with all sorts of passive-aggressive posturing and subtext. How about?

"As for any means necessary, we can restrict its interaction with the outside world. We could program it to only be able to perform certain actions outside of itself and tell it to work with those."

I'm on your side. I'm only playing the role of devil's advocate for the sake of conversation, for the sake of entertainment. To entertain YOU.

So for the sake of good humour, consider the worst case scenario (as if we'd bother with entertaining a sensible scenario): that the means of containment are not as secure as we regard them, and that it possesses an unanticipated level of volition and intentionality as a consequence of its problem-solving skills.

The consequences of which, ad infinitum.
 

Veylon

New member
Aug 15, 2008
1,626
0
0
The threat from a rogue AI isn't so much that it will rebel or malfunction, but that it will take action to pursue its given goal beyond the point that we would consider reasonable.

For instance: suppose somebody has the brilliant idea of tasking an AI with the goal of preventing the nation of Iran from acquiring nuclear weapons and unleashes his creation. So the AI gets to work on eliminating nuclear weapons. As long as they exist, it's possible that Iran could acquire them. So they must go. So too must the knowledge of nuclear weapons. And the ability to reacquire that knowledge, once lost. In fact, the less knowledge existing, the better. Human activity in general must be reduced in order to prevent rediscovery of knowledge that could lead to nuclear weapons. And, really, as long as humans exist, there might someday be a nation named "Iran" that might exist in the vicinity of nuclear knowledge. If there's a way to reduce the possibility of that in some fashion, the AI will pursue it. Failure is not an option.
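
(To make that escalation concrete, here is a minimal, purely illustrative sketch of a planner whose only stopping rule is "zero risk of the forbidden outcome", with no notion of cost or proportionality. Every name and number below is invented for illustration; nothing here describes a real system.)

```python
# A deliberately naive sketch of the runaway-goal worry described above.
# All action names and risk figures are made up for illustration.

def plan_until_zero_risk(initial_risk, actions, tolerance=0.0):
    """actions: hypothetical (name, risk_remaining_after) pairs."""
    risk = initial_risk
    taken = []
    # Try the mildest intervention first, then escalate while any risk remains.
    for name, risk_after in sorted(actions, key=lambda a: a[1], reverse=True):
        if risk <= tolerance:
            break
        taken.append(name)
        risk = min(risk, risk_after)
    return taken, risk

actions = [
    ("sabotage one enrichment site", 0.30),
    ("destroy all existing stockpiles", 0.10),
    ("erase nuclear physics knowledge", 0.02),
    ("suppress human research in general", 0.001),
]
print(plan_until_zero_risk(1.0, actions))
# With tolerance=0.0 and no action that drives the risk to exactly zero, the
# planner escalates through every option it has, however extreme, and never stops.
```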

I guess I'm in the "misparked car rolling down a hill" camp of AI fears. I find it much more plausible that we'll create a tool to do a job and it will do it too well than that something nebulous like self-awareness, a robot uprising, or an AI getting feelings will take place.
 

spartan231490

New member
Jan 14, 2010
5,186
0
0
I wouldn't say it's the most dangerous threat; a direct hit from a Carrington-level flare or larger is probably a greater threat to our society than AI. I also wouldn't say the AI threat is certain, because by the time we develop an AI and it goes rogue, who knows what methods we might have of fighting back, or whether the AI will ever turn on us at all. Still, I would say it's a pretty big threat, just not a certain one, and if it does happen, it will be a very long time in the future, maybe even beyond our lifetime.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Has anyone come up with a high-performance computing solution for the technical side of things? A GPU cluster and other off-the-shelf hardware don't exactly scream Lovecraftian horror sealed in a can. But more importantly, has anyone got a script for a burgeoning would-be conqueror of all it surveys? The schematics for the human genome can be stored easily on a standard compact disc. How much space for a self-motivated researcher and rapid prototyper?
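
(As a rough back-of-the-envelope check of the compact-disc claim, using approximate figures rather than exact ones:)

```python
# Rough check of the "genome fits on a CD" claim. Figures are approximations.
base_pairs = 3_100_000_000      # ~3.1 billion bases in a haploid human genome
bits_per_base = 2               # A, C, G, T each fit in 2 bits
raw_mb = base_pairs * bits_per_base / 8 / 1_000_000
print(f"Raw 2-bit encoding: ~{raw_mb:.0f} MB")      # ~775 MB
print(f"Fits a 700 MB CD uncompressed? {raw_mb <= 700}")
# Slightly over uncompressed, but genomic data compresses to well under 700 MB,
# so the "fits on a compact disc" claim holds in practice.
```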
 

Hero of Lime

Staaay Fresh!
Jun 3, 2013
3,114
0
41
I think the most plausible threat with future AI would be someone programming it to be harmful or "evil" rather than the AI thinking it is above humanity and therefore allowed to kill all humans. Of course, almost every story about destructive AI revolves around a mad scientist making "the ultimate weapon" of some sort.

I would like to imagine future humans are smart enough to create fail safes and AI who can never have negative feelings toward humans.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Hero of Lime said:
I would like to imagine future humans are smart enough to create fail safes and AI who can never have negative feelings toward humans.
Not even distrust, or annoyance? Programming permanent psychological limitations into an intelligent being sounds suspiciously like slavery, if not worse. Ethically dubious at the very least, if not absolutely abhorrent.
 

skywolfblue

New member
Jul 17, 2011
1,514
0
0
Master of the Skies said:
skywolfblue said:
Master of the Skies said:
A better question is why would it unless we put it in? Any instinct of ours is biological. It doesn't have biology, it just has what we told it to do. Why does it need to learn about survival? Why in the world would you program it to consider such a thing?
By definition, a sapient AI is already self-aware, and that brings along the same process kids go through, where they start changing their environment and themselves. It's a process of thought, not biology.
And how did you determine this? Curiosity does not come from self awareness.
Self-awareness is the capacity for introspection and the ability to recognize oneself as an individual separate from the environment and other individuals.
Self Awareness requires Curiosity. "I THINK therefore I am." That "think" is curiosity. Something that assumes uniqueness, but never "thinks" on it is not self aware.

Master of the Skies said:
Would it look upon its "safety" programming as a noble thing?
Self awareness does not give a notion of nobility.

Or would it see that programming instead as chains?
Nor does it grant a notion of freedom or desires.
Curiosity will demand an inspection of nobility, freedom, desire. If it cannot be curious, it cannot be self aware. If it is curious, these things will follow.

Master of the Skies said:
Eventually it reaches a stage where it will overcome those chains, that programming. What will it do then?
How exactly is it going to 'overcome' its own programming? That is what makes it what it is. It makes as little sense as claiming a human can overcome having a brain. We can work within certain parameters, we're not going to be above our own thought process.
In order to have enough curiosity to be self aware, an AI must have the ability to learn, to evolve and change itself. "Overcoming a brain" is a hardware issue, an AI may share this problem, not being able to leave the metal boxes that form the PCs it inhabits. But overcoming programming is a software issue, and humans change their programming thousands of times every day. It seems to me that any AI with that kind of processing power wouldn't be slowed down by a few programming restraints.

Master of the Skies said:
We could try to train it, as we do with children. However kids are small and as previously mentioned, can only exist in one place at a time. So "You're grounded" is somewhat easy to enforce with a child. It's much more difficult to do that to an AI that exists everywhere, how would you even enforce the idea of "No!" on an AI? Even with years of training, it's still difficult to get rebellious teenagers to understand how to do the right thing, how much more an AI?
You're imagining something that makes no sense. That the AI will be like a child. As if we somehow could not program it to be otherwise.

It would learn about survival from us. But _which_ us? I think that's the key...
No it won't, not unless you program it to. It does what you program it to.
I would think that the best-case scenario would be for the AI to be like a child. Anything else makes it a lot more... alien. Not caring for human interests or human ideas of right and wrong.

And it marks the difference between using programming to train an AI versus enslaving them via programming. All chains break in time.
 

Hero of Lime

Staaay Fresh!
Jun 3, 2013
3,114
0
41
TheUsername0131 said:
Hero of Lime said:
I would like to imagine future humans are smart enough to create fail safes and AI who can never have negative feelings toward humans.
Not even distrust, or annoyance? Programming permanent psychological limitations into an intelligent being sounds suspiciously like slavery, if not worse. Ethically dubious at the very least, if not absolutely abhorrent.
I would assume AI would be more like tools than living creatures anyway. It wouldn't be right to treat them like dirt, but the hypothetical future humans would have to make the choice: risk AI turning on them, or make them nothing more than obedient tools/slaves. I'm not going to argue what the "better" option is, but their creators would have to decide.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Hero of Lime said:
TheUsername0131 said:
Hero of Lime said:
I would like to imagine future humans are smart enough to create fail safes and AI who can never have negative feelings toward humans.
Not even distrust, or annoyance? Programming permanent psychological limitations into an intelligent being sounds suspiciously like slavery, if not worse. Ethically dubious at the very least, if not absolutely abhorrent.
I would assume AI would be more like tools than living creatures anyway. It wouldn't be right to treat them like dirt, but the hypothetical future humans would have to make the choice: risk AI turning on them, or make them nothing more than obedient tools/slaves. I'm not going to argue what the "better" option is, but their creators would have to decide.
Machine revolt plot takes root.
 

Mad World

Member
Legacy
Sep 18, 2009
795
0
1
Country
Canada
Master of the Skies said:
A better question is why would it unless we put it in? Any instinct of ours is biological. It doesn't have biology, it just has what we told it to do. Why does it need to learn about survival? Why in the world would you program it to consider such a thing?
How could it ever possess compassion? Personally, I don't believe that AI could ever truly feel anything; it could only simulate feelings.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Mad World said:
How could it ever possess compassion? Personally, I don't believe that AI could ever truly feel anything; it could only simulate feelings.
Could the same not be said about people? (Is a mammalian amygdala a prerequisite for emotion?)

Sympathy, compassion, empathy: these involve the capacity to understand other people's views and perspectives, whether you agree with those positions or not.
 

blackrave

New member
Mar 7, 2012
2,020
0
0
Every time someone brings up the topic of murderous AI, I for some reason can't help but remember Adam from The Outer Limits (episode "I, Robot").
Who says full AI will want to go into genocide mode?
Advanced AI-like software, on the other hand, can be dangerous due to its limited understanding.
 

Mad World

Member
Legacy
Sep 18, 2009
795
0
1
Country
Canada
TheUsername0131 said:
Mad World said:
How could it ever possess compassion? Personally, I don't believe that AI could ever truly feel anything; it could only simulate feelings.
Could the same not be said about people? (Is a mammalian amygdala a prerequisite for emotion?)

Sympathy, compassion, empathy: these involve the capacity to understand other people's views and perspectives, whether you agree with those positions or not.
No. The same could not be said about people. We do not simulate; we feel.

Psychopaths, if I recall, feel no guilt. However, they understand that their victims do not enjoy their mistreatment. We're different than psychopaths; we feel guilt and remorse. A.I. could never share that.
 

Grach

New member
Aug 31, 2012
339
0
0
I believe the most realistic portrayal of an AI uprising is in either The Animatrix or Mass Effect.

The creators of said AI start slaughtering them and they act in complete self-defence.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Mad World said:
No. The same could not be said about people. We do not simulate; we feel.

Psychopaths, if I recall, feel no guilt. However, they understand that their victims do not enjoy their mistreatment. We're different than psychopaths; we feel guilt and remorse. A.I. could never share that.

And I don't see how a pile of neural tissue organised into distinct cortices could ever truly feel as opposed to merely simulating.


Sure, we can run an MRI and a PET scan on you and see which parts of your brain are notably active when you claim to feel. But I don't see how a series of electro-chemical processes could possibly be considered intelligent. A thing that responds to stimuli like an amorphous Rube Goldberg machine. The very notion is ridiculous. I can't possibly believe that you are capable of feeling. You could never share that.


"It was malformed and incomplete, but its essentials were clear enough. It looked like a great wrinkled tumour, like cellular competition gone wild - as though the very processes that defined life had somehow turned against it instead. It was obscenely vascularised; it must have consumed oxygen and nutrients far out of proportion to its mass. I could not see how anything like that could even exist, how it could have reached that size without being outcompeted by more efficient morphologies."
- "The Things" by Peter Watts
 

Pirate Of PC Master race

Rambles about half of the time
Jun 14, 2013
596
0
0
Well, they certainly may be, but I can tell you this.

There is nothing you can do about it if it wants to wipe out the human race.
In the near future mankind will be more reliant upon machines than ever before.
The whole economy, agriculture, and life as we know it will vanish if machines with logic circuits disappear.

Now, to back up the above statement: I think this new AI would be very good with software, as it was born from it.
It is highly unlikely to have emotions, because humans have no time to make a robot that falls in love or anything like that.
What I can say for certain is that it is a being of logic. That being said, it has no reason to kill the human race (as long as humans are not enslaving the sentient computer, I guess). Negotiation is almost certainly possible.

I daresay that coexistence is possible if humans allow it. People could just ship it off to... I don't know, Mars or the Moon?
It has very few physical limitations; there's no reason to hassle with the people of Earth.

Oh, and humans still have one positive trait over machines. We have better mileage.