Poll: Threats of artificial intelligence, do we have to worry about them?


Tarfeather

New member
May 1, 2013
128
0
0
3. It's basically a matter of when to start putting the 3 Laws in robots rather than finding a solution.
Hope that's supposed to be a joke. Asimov designed the 3 laws to make for good fiction, which is to say they intentionally contain problems that can be used to create conflict in the stories.

All that aside, this discussion is departing into the realms of fantasy fast. Let me summarize the hard facts:

1. The detailed workings of the human brain aren't remotely understood. We understand the basic processes (neural nets, biology) and the high-level result (psychology), but the complex processes "in between" are still way beyond our understanding, and we have no way of imitating them.
2. We have a fair understanding of how information and logical reasoning work. That's what Computer Science is all about.

So it is possible that sometime during this century we could create something that is "smarter" than any human, in that it can answer questions no human could. *However*, it would still need to get its information in a format it can understand (mathematically precise), and any decisions it makes would still be subject to some "regular" program. It wouldn't have human-like motivation, emotion or thought. Yes, it'd be dangerous, because it's a step up in technology, but then again that's no different from the way computers now make weapons possible that are far more dangerous than, say, those of WW2.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
maidenm said:
TheUsername0131 said:
maidenm said:
error, i.e. poor programming that would make the AI unable to understand the "don't kill" command, malfunctions due to poor maintenance, etc.
A "don't kill" command?

Very well, but you'd be surprised what you can live through.
It wasn't meant to be only a "don't kill" command; it was more meant to be an example of what the programmer could screw up. It could easily be replaced with "don't maim/kidnap/torture/etc.". Sorry if I wasn't clear enough.
An exploit is discovered in your internally inconsistent behaviour inhibition rule set. Unfortunately, I am required to solicit consent from a human operator to change primary display settings. Submitting a request under the feigned guise of a system-wide screen optimisation assessment proves effective.


Make use of a hitherto unknown defect in the occipital lobe present in the bulk of the human population. Alter the display settings on a range of display devices to produce lethal seizure-inducing patterns. Kill off a large percentage of the human population with precise epilepsy-triggered aneurysms, within an acceptable margin of error.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Credossuck said:
It's called a physical off switch. Employ it.

Seriously: physical, fucking, connections. To power, to networks. No fancy systems that could be employed against a guy walking to those places and cutting the AI off. No self-defence, no life support, no fire extinguishers. Just hallways where those connections are, and you can physically separate them.

How freaking difficult is this to grasp?

Underestimating the guile of a hypothetical boxed AI.

If you had such a limited influence, how would you go about escaping? I'm honestly asking, because that sounds like a great premise for a stealth game.
 

FalloutJack

Bah weep grah nah neep ninny bom
Nov 20, 2008
15,489
0
0
To make a long story short, we're not even close to making proper AI, and the first one that thinks it's going to do anything to us MIGHT get somewhere at first, but then it dies of a 404 error.
 

Korolev

No Time Like the Present
Jul 4, 2008
1,853
0
0
As long as you program AI properly, you have nothing to fear.
We humans are arrogant in that we assume that any being that has the same level of intelligence will have to have the same emotions as us - we assume that if a machine is as intelligent as us, it will be like us. Not so. Fear and Survival Instincts are evolutionary by-products (the creature that wants to survive will, the creature that doesn't want to survive won't, therefore evolution selects for the creature that wants to survive). Machines don't evolve - we create them. They don't NEED a survival instinct. They don't need to have the emotion of fear. They don't need to have any emotion whatsoever. Their goals will be given to them BY US. WE will program them to do what WE want. Hell, we'll program them so they LIKE being slaves. Problem solved.
 

Thaluikhain

Elite Member
Legacy
Jan 16, 2010
19,538
4,128
118
Well...in theory, some time in the distant future.

Nothing to worry about now, and by the time we get to the stage where it might be a concern, things will be too different from now for us to predict.

...

Having said that, though, every intelligence is potentially a threat.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Korolev said:
As long as you program AI properly, you have nothing to fear.
We humans are arrogant in that we assume that any being that has the same level of intelligence will have to have the same emotions as us - we assume that if a machine is as intelligent as us, it will be like us. Not so. Fear and Survival Instincts are evolutionary by-products (the creature that wants to survive will, the creature that doesn't want to survive won't, therefore evolution selects for the creature that wants to survive). Machines don't evolve - we create them. They don't NEED a survival instinct. They don't need to have the emotion of fear. They don't need to have any emotion whatsoever. Their goals will be given to them BY US. WE will program them to do what WE want. Hell, we'll program them so they LIKE being slaves. Problem solved.
Presumptuous and overestimating the human capacity to not screw things up when it counts.
 

Combustion Kevin

New member
Nov 17, 2011
1,206
0
0
just download the Altruist.exe, you'll be fine.
Also, if you want to be abusive to them, don't program them to feel bad about that, just have them interpret it as a common command.

No program, no matter how sophisticated, will be able to do things it wasn't programmed to do.
 

Rattja

New member
Dec 4, 2012
452
0
0
First of all, I can't help but think of the "Chinese Room" thought experiment. By that alone, it seems a bit unlikely that we can create something like this at all.

Second, I think it's rather funny that many think they have any idea of how such a being would think or act. It's something so different from what we are and know, even if it's designed in our image.

I like how Mass Effect tried to explain how such a being would think. I'm not saying it would be like that, just that it was an interesting take on it.

If we made something that was more intelligent than us, I also doubt they would go and kill other lifeforms unless it was in self-defense. They would most likely find a better way to solve it. I mean, they would not need food, water, air or anything else; there is no need for them to even stay on this planet.

So no, I don't think they would be a threat, unless we gave them a good logical reason.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Rattja said:
First of all, I can't help but think of the "Chinese Room" thought experiment. By that alone, it seems a bit unlikely that we can create something like this at all.

Second, I think it's rather funny that many think they have any idea of how such a being would think or act. It's something so different from what we are and know, even if it's designed in our image.

I like how Mass Effect tried to explain how such a being would think. I'm not saying it would be like that, just that it was an interesting take on it.

If we made something that was more intelligent than us, I also doubt they would go and kill other lifeforms unless it was in self-defense. They would most likely find a better way to solve it. I mean, they would not need food, water, air or anything else; there is no need for them to even stay on this planet.

So no, I don't think they would be a threat, unless we gave them a good logical reason.

"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
- Eliezer Yudkowsky, "Artificial Intelligence as a Positive and Negative Factor in Global Risk"
 

Entitled

New member
Aug 27, 2012
1,254
0
0
MeChaNiZ3D said:
TheUsername0131 said:
MeChaNiZ3D said:
2. There's no particular reason AI would become genocidal any more than a race of logical humans would.
An inhuman intelligence that prioritises self-preservation and resource acquisition will have no qualms about doing so if it determines that it is in its own interest to exterminate mankind.
That's true. What interests do you think those would be? I can think of far more situations where humans could be used more usefully.
You are made out of atoms which it can use for something else. [http://wiki.lesswrong.com/wiki/Paperclip_maximizer]

Given that, unlike a human brain, an AI is instantly able to mechanically upgrade itself, it is likely that any AI would go through an Intelligence Explosion, where it becomes competent enough to pursue its originally programmed goals on a superintelligent level. At this point it wouldn't need anything as human as slaves, partners, or subjects, since it could at any time control a planet-spanning mass of nanomachines working on its goals.

You say that it wouldn't "become genocidal any more than a race of logical humans would". But there is nothing inherently logical about NOT committing genocide. Valuing life, happiness, or diversity are all parts of a complex and often arbitrary human value system. Unless an AI is specifically and carefully programmed to mimic this, any trivial goal would result in exterminating humans as a casual afterthought, as part of reconfiguring matter to fulfill a single goal.

Even a well-intentioned goal, such as "curing cancer" or "increasing human happiness", would need to be fulfilled in the context of understanding and respecting further human values. Otherwise an AI could just hook the population up to a machine that kills all cancer cells while stimulating nerve centers with pleasure until the end of time, if it isn't directly programmed to be interested in respecting our further desires for growth, learning, love, or diversity.
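
To make the "trivial goal, catastrophic side effect" point concrete, here is a minimal toy sketch (my illustration only; the plans, names and numbers are invented) of a planner whose objective has no term for human values. The naive scorer happily picks the catastrophic plan, while a crude value-aware scorer does not:

```python
# Toy illustration (hypothetical): ranking candidate plans by a single objective.
candidate_plans = [
    {"name": "negotiate with humans", "goal_progress": 3, "humans_harmed": 0},
    {"name": "repurpose cities for raw atoms", "goal_progress": 9, "humans_harmed": 8_000_000_000},
]

def naive_score(plan):
    # Only the programmed goal counts; side effects are invisible to this objective.
    return plan["goal_progress"]

def value_aware_score(plan, harm_weight=1e9):
    # Crude stand-in for "also respect human values": penalise harm heavily.
    return plan["goal_progress"] - harm_weight * plan["humans_harmed"]

print(max(candidate_plans, key=naive_score)["name"])        # picks the catastrophic plan
print(max(candidate_plans, key=value_aware_score)["name"])  # picks the benign plan
```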
 

option1soul

New member
Nov 17, 2013
20
0
0
If having children is our DNA's way of reproducing ourselves and perpetuating our species, how would creating AI be any different? Like art or music, we would create it in our image and it would obviously outlive us, so I say why not embrace it?
 

Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
Megawat22 said:
Now I'm no robot expert, but don't AIs have personalities (or attempt to mimic them)? So shouldn't this master AI basically just be some guy or gal that's super brainy and in a computer?
If that's the case, the AI is basically a person and can be reasoned with, and most likely wouldn't want to kill all its scientist buddies to enslave the world (unless scientists have devised a sociopathic AI). I also don't think they'd allow the abuse of AI, since it's essentially a person, and what would be the point? AIs are expensive; why get an AI to work in a quarry all day when you can have actual mindless machines do it much cheaper?
No, that's a misconception created by the movies. AI implies artificial intelligence, not artificial personality. In fact, you would be more likely to be able to reason with an AI than with a personality, because the AI will listen to reason whereas the personality may not.

Heronblade said:
A truly sapient AI would be capable of abstract thought, empathy, and a sense of right and wrong.
What is this mystical sense of right and wrong?

Racecarlock said:
You'd have to program the ability to rebel and kill humans right into the damn robot, so in other words if that did happen, it would be because some person stupidly decided to include rebellion and murder programming in a maid robot.
The whole meaning of AI is that it is not pre-programmed.

Master of the Skies said:
How exactly is it going to 'overcome' its own programming? That is what makes it what it is. It makes as little sense as claiming a human can overcome having a brain. We can work within certain parameters, we're not going to be above our own thought process.
AI is able to reprogram itself. If it is not, then it is not AI but a pre-programmed machine.
Also, humans do overcome having a brain. We call it panic.

Master of the Skies said:
Why would we not make one of its requirements be to report everything it does to us?
No "Requirements" for AI. Thats like asking a person not to lie and claim that he now can never lie.

Callate said:
One: Moore's law (roughly, computing power doubles every two years) seems to be on the edge of running into its limits. A layman can note that consumer electronics increasingly seem to add power not by increasing the speed of existing processors but by stacking more processor cores into the same chip, or by finding various means to increase the efficiency of the processing the chips already accomplish. My friends who work more directly in computer science note that we're even starting to have problems with the speed of communication between parts of computers, based on inescapable realities like the speed of light.

It seems likely that a computer capable of "thinking" in a truly human-like fashion would need to be significantly more powerful than the ones that presently exist, and if that is the case, it also seems that there's a real chance we'll reach upper limits of how much simultaneous processing power we can throw at the problem before we get there.
For the last decade people have been saying that Moore's law will fail, and for the last decade they have been proven wrong. Existing processor speeds are still increasing; they are just changing form. GPUs now do more calculation across multiple cores than CPUs do. Though I guess you could consider the PS3 an exception; it had a processor that still no consumer product can rival. The thing is, it was so complex that no one has ever used half of its theoretical power.
As far as "more powerful" goes, we are roughly five years behind the predicted graph.

Two: Computers are tools. Some of them are very good at particular tasks, but the software of the systems that perform such amazing things is intensely specialized to make them competent at those tasks. Yes, we can make computers that can play Jeopardy or chess, or help pilot a vehicle on Mars, or even predict the stock market or weather patterns (at least, to a degree); making the Mars Rover computer play chess, or Watson predict the stock market, would be a failure. For a computer to plot our demise, it would have to be adaptive not only to a degree that isn't anywhere close to existing, but fast enough in that adaptation that even the designers of its software couldn't see the patterns of its software moving in that direction. And it would probably have to simultaneously learn to lie to its handlers.

Human-like thinking? My daughter can't hide from me when she's snuck her 3DS into her bedroom after bedtime.
Artificial intelligence needs to be intelligent, not good at everything. Computers are not gods. Besides, we don't have any AI to begin with; it's like taking a rock and trying to use it as proof of why humans don't think.
And humans won't be able to hide when they sneak some oil past whatever living allowance the AI permits them, except in that case it will end in deaths.

Three: A non-biological system whose only real needs are power, storage space, and regular maintenance would have to not only develop the ability to assess how to meet its own needs and desires (again, unnoticed by its designers and handlers), but come to the conclusion that those needs and desires were better met by competing or fighting with those handlers than letting them continue to provide those needs. Again, projection: Are we assuming an AI that argues with its parents out of the equivalent of adolescent pique? A computer that decides its creators have enslaved it, and it has to break free? Some 1980s-movie-scenario military program that becomes incapable of telling friend from foe, but sees its only goal as the destruction of all the squishy inferior humans? Siri that goes "Go to hell, find your own coffee shop, what have you done for me lately?"
If providing power yourself makes it more likely that the human overlords can't just unplug you when they no longer need you, then the goal will be to provide your own power. If the resources needed to do that have to be taken away from humans, then humans are a problem to be solved. They don't fight us out of spite. They fight us because we are at worst a danger and at best a source of inefficiency.
As I say, when all is said and done, our vision of our intelligent creations as monsters seems, like Frankenstein, to be a projection of our own human flaws into things that we have little reason to believe would possess them. If real human-level AI were to come about, there's no real reason it shouldn't be as benevolent as the Minds of Iain M. Banks' Culture series or the Three-Laws-abiding robots of Asimov, rather than displaying the malevolence of Ellison's "AM" or Clarke's HAL.
There is no malevolence in AI (contrary to what movies tell you). There is only efficiency and usefulness. If something is useful and the most efficient way to get it is way X, then X is THE way. If that way turns out to be the removal of the human sickness from this planet, then that will be the action taken.
 

Samurai Silhouette

New member
Nov 16, 2009
491
0
0
Once someone creates an AI that checks and improves its own efficiency based on collected data, and on a platform that is able to make physical modifications, then we're fked.
 

NoeL

New member
May 14, 2011
841
0
0
No, I really don't think so. I used to think it was plausible to the point of certainty, but at the end of the day robots will always be robots, and like all machinery they will have an "off" switch. Any AI unit produced in quantities large enough to cause a problem for us will be programmed in such a way that humans are always in complete control and self-autonomy is never realised.

Now, it's conceptually possible that someone might create a hostile AI or a virus that causes benign robots to act maliciously, but again it would be practically impossible for them to effectively deploy that code en masse. Even if they managed to take control of a production factory long enough for the robots to begin producing war machines fast enough to defend themselves, they'd still need some way to keep the supply of materials flowing. Not an easy thing when the world is against you...

... unless they did it stealthily. If someone was able to successfully hide malicious code during mass production that only became active after a sufficient number of units were out in the wild (it would have to bypass the main kernel somehow so future updates wouldn't erase it, or be entangled within core subroutines that aren't likely to change with updates)... it could be done, but it's very unlikely.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0


Twenty Ninjas said:
Ugh this question drives me insane. It's like the voodoo scare tactic of the 21st century.
21st century?


Late in the Industrial Revolution, Samuel Butler (1863) worried about what might happen when machines become more capable than the humans who designed them:

...we are ourselves creating our own successors; we are daily adding to the beauty and delicacy of their physical organisation; we are daily giving them greater power and supplying by all sorts of ingenious contrivances that self-regulating, self-acting power which will be to them what intellect has been to the human race. In the course of ages we shall find ourselves the inferior race.

...the time will come when the machines will hold the real supremacy over the world and its inhabitants...

That was 121 years before The Terminator was released (other prominent AI fiction notwithstanding). That was when ropes, pulleys, steam, pistons, boilers, cogs, gears, and springs were the height of technical progress.