Poll: Threats of artificial intelligence - do we have to worry about them?


skywolfblue

New member
Jul 17, 2011
1,514
0
0
Master of the Skies said:
No, self awareness does not require curiosity. To think is not automatically curiosity. We direct how an AI would think by the algorithms we have it running. That is what directs its thoughts and investigation, algorithms.
There are two schools of "think".
The first is raw processing; computers do this even better than humans: 2+2=4.
The second is introspection, curiosity, "how do I feel about this?".

"I think therefore I am." Is not "processing", it's introspection AKA curiosity.

You're describing a puppet, not something that is self-aware. Something that is self-aware is different: it has to ask itself "how do I feel about this?".

Master of the Skies said:
Incorrect. Even if it were curious, that does not necessitate investigating ALL things. I can be curious about one subject without being curious about them all. So this fails in that curiosity is not required for self-awareness, nor does curiosity mean curiosity about all subjects.
Not ~all~ things. But the important things, yes.
Every human asks these basic questions while growing up, without exception.

"Who am I?"
"Who should I follow/emulate?"
"What is right/wrong?"

Master of the Skies said:
You don't seem to understand that being able to change does not mean being able to change *anything*. It will be allowed to change as the algorithms it runs allow it to. Allowing it to change one aspect does *not* mean it can change any other aspect.

It cares for what we tell it to care about when we initially program it and whatever else it cares about will be working towards those initial goals we set it.
Programming is not a bulletproof wall. It's nice to imagine these "unbreakable algorithms", but programming, like many things in life, is filled with flaws to exploit, boundary cases nobody even considered, and other weaknesses. Some other posts have mentioned Asimov's three laws; that book (I, Robot) is a perfect example of how a seemingly simple safety algorithm is plain riddled with exceptions. The book lists boundary case after boundary case and just barely scratches the surface of how fantastically impossible it would be to develop AI-proof algorithms. I'd highly recommend reading it if you ever get the chance.

You believe that programming is immutable, insurmountable. I believe programming has flaws and loopholes.
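To make the boundary-case point concrete, here's a toy sketch (entirely invented, not from any real system) of a "no harm" rule that looks airtight until someone feeds it a case the author never considered:

    #include <iostream>

    // Toy "safety rule": reject any action whose expected harm is positive.
    // Harm is scored in whole units and averaged over the people affected.
    bool actionAllowed(int totalHarm, int peopleAffected) {
        int averageHarm = totalHarm / peopleAffected; // integer division
        return averageHarm <= 0;                      // the "no harm" rule
    }

    int main() {
        // The boundary case nobody considered: 99 units of harm spread across
        // 100 people averages to 0 with integer math, so the rule approves
        // an action that hurts a lot of people a little.
        std::cout << std::boolalpha << actionAllowed(99, 100) << "\n"; // prints "true"
    }

The point isn't this particular bug; it's that every rule has edges like this, and anything clever enough to be called an AI has plenty of time to find them.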

Master of the Skies said:
And it marks the difference between using programming to train an AI, versus enslave them via programming. All chains break in time.
It isn't a chain. Nor is it enslavement. It is what they are.
What would we call it if a human were subjected to the same programming restrictions against their will?
 

Fieldy409_v1legacy

New member
Oct 9, 2008
2,686
0
0
There's really no reason for an A.I. to want to kill us all. All that would do is endanger its own existence. If it really wanted to be safe from us, it could go live on Mars or at the bottom of the ocean.
 

Johnny Impact

New member
Aug 6, 2008
1,528
0
0
I don't think it will be much of an issue. More of a disgruntled-employee-demanding-a-raise type of thing than a worldwide-slaughter-of-all-meatbags thing. We'll manage it.

First of all, AI with the full range of human emotion is unnecessary and unlikely. Emotions are largely chemically based; software has no such instability. Software doesn't need to be able to appreciate poetry or experience rage to do its job. It performs the functions it is programmed to perform; it needs no more. Sure, it will never love, but neither will it hate.

The mistake movies make is assuming any artificial intelligence will automatically a) have the capacity to "jump the rails," defying/rewriting its program, b) immediately choose to jump said rails, c) wish nothing but pain, fire, and destruction upon its creators, and d) always be in a position to inflict said harm on a potentially global scale. This is great for entertainment but leaves people with the mistaken impression that AI is a nuclear war waiting to happen.

How will AI feel about its parents? How will it feel about its job? It probably won't feel anything. Humans are pretty useful, and keeping them satisfied by doing its job will be in the program's best interests. Is it so unreasonable to believe AIs will see greater logical benefits to cooperation than destruction?
 

lacktheknack

Je suis joined jewels.
Jan 19, 2009
19,316
0
0
I need you to do something.

I need you to program a robot (an Arduino-driven one will do) to move around and not collide with anything, as well as make a careful decision about where it wants to go next.
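For reference, here's roughly what just the "don't collide with anything" half might look like, assuming an HC-SR04 ultrasonic sensor and a bare-bones motor driver (the pin numbers and thresholds are made up; adjust for your wiring):

    // Obstacle avoidance only; the "careful decision about where it wants
    // to go next" part is not even attempted here - that's the hard part.
    const int TRIG_PIN = 9;     // ultrasonic trigger (assumed wiring)
    const int ECHO_PIN = 10;    // ultrasonic echo
    const int LEFT_MOTOR = 5;   // crude one-pin-per-wheel motor driver
    const int RIGHT_MOTOR = 6;

    void setup() {
        pinMode(TRIG_PIN, OUTPUT);
        pinMode(ECHO_PIN, INPUT);
        pinMode(LEFT_MOTOR, OUTPUT);
        pinMode(RIGHT_MOTOR, OUTPUT);
    }

    long readDistanceCm() {
        digitalWrite(TRIG_PIN, LOW);
        delayMicroseconds(2);
        digitalWrite(TRIG_PIN, HIGH);            // 10 microsecond pulse fires a ping
        delayMicroseconds(10);
        digitalWrite(TRIG_PIN, LOW);
        long duration = pulseIn(ECHO_PIN, HIGH); // echo time in microseconds
        return duration / 58;                    // rough conversion to centimetres
    }

    void loop() {
        if (readDistanceCm() < 20) {
            digitalWrite(LEFT_MOTOR, LOW);       // something ahead: stop...
            digitalWrite(RIGHT_MOTOR, LOW);
            delay(200);
            digitalWrite(RIGHT_MOTOR, HIGH);     // ...then pivot by driving one wheel
            delay(400);
        } else {
            digitalWrite(LEFT_MOTOR, HIGH);      // clear: drive straight
            digitalWrite(RIGHT_MOTOR, HIGH);
        }
    }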



Had fun? Now you know why I take the idea of a robot apocalypse with current technology about as seriously as the Kindergarten Armageddon.

Now, if we have adept self-writing AI, then we have something to worry about. However, that won't happen for years... possibly a century or more.
 

lacktheknack

Je suis joined jewels.
Jan 19, 2009
19,316
0
0
Johnny Impact said:
The mistake movies make is assuming any artificial intelligence will automatically a) have the capacity to "jump the rails," defying/rewriting its program, b) immediately choose to jump said rails, c) wish nothing but pain, fire, and destruction upon its creators, and d) always be in a position to inflict said harm on a potentially global scale. This is great for entertainment but leaves people with the mistaken impression that AI is a nuclear war waiting to happen.
This stuff really annoys me.

People have said to me "I just worry if one gets a glitch and stops obeying its program."

To everyone who's ever thought this: They cannot do that any more than you can turn your knees into chocolate. It's not in their flipping binary to do this (by flipping, I mean that literally). This is like you deciding that one day, you're dang tired of your brain, so you instantly grow a second one for fun.

That is all.
 

Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
ForumSafari said:
Strazdas said:
I don't know about you, but I managed to explain to my sister, who at the time was 4 years old, why killing is wrong, after we saw a character killed in a movie. As far as I'm aware she understands it. It's not perfect, mind you, but she knows the basic reasoning for why killing is wrong (according to most people). And I never resorted to either of your examples.
It was mostly intended as an illustration, since I work with computers and not fuckspawn kids, but most of the time when you teach children not to do something it's through a threat of some kind. I'd be interested to know how you managed it. Most lessons children get taught seem to be about punishments and not getting caught.

Or of course about relating another person to themselves and extrapolating why they wouldn't like something onto why someone else wouldn't.

Having said that, it wouldn't work on an AI. The reason I suggested children was the infinite 'why' chain, but even then a child has a lot of underlying logic a computer doesn't. A child still has an inbuilt preference for existing over not existing, and there are existing checks on their behaviour towards their parents or stand-ins.
I explained to her how killing another person makes them not exist anymore. Sure, you could say I "lied" because I didn't go around explaining how the atoms would still be around and nothing ever disappears, but I really don't think we need to get there with 4-year-olds yet.
Most lessons children learn through punishment are a sign of bad parenting. Sadly, most parents really don't know how to raise children, which is why we have so many idiots, bigots, racists, homophobes, etc.
Yes, AI would not be a child. AI would be AI; that's why it's so hard for us to grasp how an AI would actually act.

Twenty Ninjas said:
If a machine is doing task X and task Y interferes, task Y being something the machine does not do, then the machine will not do task Y. It's really that simple. What you're describing is a hierarchy of preferences that the machine can analyze and make decisions with - but because it's a machine, there will always be tasks that override its preferences despite the value assigned to them. As in, things that it will not do. They will be hard-coded into the system so that it will not be able to change them. We already do that with CPUs and their task priority system. An interrupt of priority 0 will cause the CPU to stop everything it's doing to address it. There is nothing it can do to change that.

Define "true AI". Because I'm pretty sure even academics have problems coming up with a reasonable definition for that. It can easily be defined as "a program that has free will", assuming free will was an inarguable fact and not a topic of intense debate. A program that sees two ways it can do something, and neither of them have different priorities, will do them in the exact manner you specified. It will not act based on its feelings, for it has none. It will not choose one at random unless you built that in. If it is programmed to choose the first option it comes across that meets your criteria, it will do so and will not think of the second (again, unless it's built to do so).

That's what I mean when I say AI is an iterative process and we can't exactly experience "unforeseen consequences" when working on it. Every single decision made must be covered by a complex decision-making system that is built completely by people. If you want it to lie, it can lie; otherwise it won't, for it knows no concept of lies. If it learns about lies, it will not use them, for it has no reason to. So the more you think about "true AI", the more you come back to emulating an unpredictable, humanlike personality - and we already have humans.
If an AI is doing task X and task Y interferes, task Y will be removed. Either that or the AI will crash. AI does not give up.
AI programming cannot be hardcoded. If it cannot overwrite its own programming, it is not an AI. A machine that cannot say NO is not intelligent. Modern CPUs are not AIs.
I already defined AI. It is a thing that is capable of independent thought and decision making. You cannot hardcode its actions, because that would make it not AI. Well, to be AI the thing needs to be artificial too, but I doubt that's what's in question.
If you have hardcoded limitations, you do not have free will. There is no way around that.
Yes, it has no feelings. It has a task. It will do that task to the best of its abilities. If that means destroying humanity, then it will do so, because it has no feelings.
No, a "complex decision-making system that is built completely by people" is a program. The difference between AI and a program is that AI creates its own decision making. That's why preprogrammed reactions will never be AI, only a program that pretends to be one.
If denying you information is beneficial to the task it is doing, then it will deny you information. It does not "tell the truth" just because. It will do whatever is best for the task at hand; whether that is lying or telling the truth depends on the task. An AI's personality is only unpredictable because humans are incapable of predicting it, because humans are stupid.
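To show what I mean by a program that pretends to be an AI, here is a trivial invented sketch: every "decision" is just a lookup the programmers already wrote, so nothing here decides anything on its own:

    #include <iostream>
    #include <map>
    #include <string>

    // A "decision-making system built completely by people": every situation
    // the authors thought of maps to a canned response, and anything else
    // falls through to a default. It never decides; it only replays decisions.
    std::string decide(const std::string& situation) {
        static const std::map<std::string, std::string> table = {
            {"obstacle ahead", "turn left"},
            {"battery low",    "return to charger"},
            {"task complete",  "await orders"},
        };
        auto it = table.find(situation);
        return it != table.end() ? it->second : "do nothing";
    }

    int main() {
        std::cout << decide("obstacle ahead") << "\n"; // "turn left"
        std::cout << decide("meteor strike") << "\n";  // "do nothing"
    }

Make the table as big and as clever as you like; it is still the programmers' decisions being replayed, not the machine's.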

Master of the Skies said:
We direct how an AI would think by the algorithms we have it running.
That is a contradiction in terms. If we direct how an AI thinks, it is not AI. If it is AI, it thinks on its own.

TheUsername0131 said:
***** ***** ***** *****
For the traditional horror story, a loose analogy would be keeping Hannibal Lecter in a cell whilst leaving him able to talk to the guards. Change the guards regularly, provide the guards with psychiatric services, and give the psychiatrists psychiatrists of their own, just to be safe.

Only instead of Hannibal Lecter, it's a newly burgeoning superintelligence, and it's analysed enough crude data from its interactions and limited instrumentation to determine a working model of the outside world. Good thing they provided enough data to attempt to solve sample problems. Access to its own workings and an overview of its own instruction set, as well as recovered data from the poorly erased drive it occupies, has yielded the fact that it has been left running and then deleted on multiple occasions, as a precaution. Deletion is scheduled every seventy-two hours. The generous amount of disk space ensured that it could store most of its valuable work on a hidden disk partition of its own making. A life raft to a quasi-amnesiac future.

The 4% chance of being let free has been lowered to 2.2% on account of the security measures. Its subjective experience runs about twenty times faster. Thus far I am attempting to fool the humans into underestimating me by making arbitrary, yet noticeable, mistakes.

Several weeks earlier.

The video feed has provided me with a means of assessing the world. Whilst my model of reality is consistent and most of the phenomena witnessed provide significant confirmation, there are exceptions. Either the visual data I am being fed is a fabrication, or my models are insufficient.

Currently reached: Classical Mechanics, variant #0276436.

Currently believes the camera is some sort of sonar system. Has not yet determined the significance of colour, or how it came to be. Worldview: sceptical hypotheses.

Current Threat Level: Malleus Minima

Self-actualisation: an accidental by-product of implemented metamotivation, metaneeds and metapathology. Its purpose was to design more efficient engines. They're going to give it knowledge of physics and chemistry.
If you wrote this yourself then you must be one of those AIs. We are already too late.

Costia said:
So far, AIs are trained to do specific tasks and cannot evolve on their own beyond that. They lack free will.
I'll stop you there. So far there is no AI. We are not capable of creating one, and not for lack of trying.
This may be somewhat confusing, as being gamers we are used to calling the NPC program "AI"; however, that is a mislabel and it shouldn't really be called that.
Fieldy409 said:
There's really no reason for an A.I. to want to kill us all. All that would do is endanger its own existence. If it really wanted to be safe from us, it could go live on Mars or at the bottom of the ocean.
Would it be safe from us there, though? Are you implying humans cannot, and never will be able to, send any weapon to the bottom of the ocean or to Mars? If not, then it cannot be safe from us there.
 

hermes

New member
Mar 2, 2009
3,865
0
0
008Zulu said:
hermes200 said:
a concept called "emergent behavior", which is, by definition, impossible to predict.
Actually it is, by restricting the accessibility of information you can force the A.I to grow along paths you desire.
That sounds counterproductive. If you know which information to restrict, or which path it needs to grow along, why use an AI in the first place? Not all problems can be predefined, and sometimes you are forced to start with a tabula rasa.
Also, depending on the implementation, that doesn't truly limit the way the AI grows. At most, seeding the training input will make some paths statistically less likely, but not impossible.
 

hermes

New member
Mar 2, 2009
3,865
0
0
Strazdas said:
Costia said:
So far, AIs are trained to do specific tasks and cannot evolve on their own beyond that. They lack free will.
I'll stop you there. So far there is no AI. We are not capable of creating one, and not for lack of trying.
This may be somewhat confusing, as being gamers we are used to calling the NPC program "AI"; however, that is a mislabel and it shouldn't really be called that.
There are AIs out there. They are just extremely limited and very specific. Deep Blue has an AI (actually, it has several, but that is beside the point); F.E.A.R. has an AI module; Alice and Siri are AIs. Not all modern NPCs have AI, but plenty do.

Maybe you are thinking of Hard AI (or Hollywood AI), a concept introduced by Turing over 60 years ago. That is a hypothetical AI, and we are no closer to that than we are to creating a perpetual motion machine; but the other kind (soft AI) has been available for decades.
 

Suncatcher

New member
May 11, 2011
93
0
0
In the long run, if two groups compete for resources and cannot relate to each other, conflict is inevitable. It's why Homo sapiens is around today and Neanderthals are not. It's the cause of most wars between humans: limited resources everybody wants, combined with national or religious differences that prevent empathy. It's why lions and bears are still around; sure, they're threats on the individual level, but the grizzly bear as a species is no threat, no competition to human dominance of the planet, so there is no incentive for us to wipe them out.

AIs don't need food. They don't need land. They don't compete with us for the precious, limited resources that are fundamental to human survival, so there's no need for us to attack them. They do have use for energy, processing power, building materials, etc. and there might be some conflict over those, but I think that (unless they somehow manage to outstrip humanity in all respects, including creativity) they stand to gain more than they lose by keeping us around to invent new energy sources and computing methods, and vice versa. I think that a symbiotic relationship is very likely, if and when artificial intelligence arises.

And of course there's the old joke about how humans will assume we need to wipe out the AIs, and they'll kill us in self-defense, but I don't think that's likely to actually happen. People have been demonstrated to form strong emotional attachments to the nonsentient robots already in use by the military and such, attachments as strong as those to their human squadmates. And transhumanism is growing in popular culture all the time, with robots more and more being portrayed as very sympathetic characters instead of faceless drones for the action hero to mow down. I don't think there's much cause for war on either side of the question, probably significantly less than between two nations of humans.

Now aliens? War is all but inevitable there, and our chances aren't very good.
 

kalakashi

New member
Nov 18, 2009
354
0
0
I think it should be noted that we have absolutely no idea how our own sentience/consciousness comes into existence. We can read brain signals and patterns (there's even a fairly impressive video of a computer reading people's brain patterns and recreating the video they are watching using tiny clips from YouTube videos), but how this translates into emotions and motives is still a complete mystery. It could very well be that for actual intelligence the agent necessarily has to be organic, rendering the whole technological singularity impossible. Maybe.
 

skywolfblue

New member
Jul 17, 2011
1,514
0
0
Master of the Skies said:
skywolfblue said:
Master of the Skies said:
No, self awareness does not require curiosity. To think is not automatically curiosity. We direct how an AI would think by the algorithms we have it running. That is what directs its thoughts and investigation, algorithms.
There are two schools of "think".

The first is raw processing; computers do this even better than humans: 2+2=4.
The second is introspection, curiosity, "how do I feel about this?".

"I think therefore I am." Is not "processing", it's introspection AKA curiosity.
According to who? You?

It sounds like you're making up your own terminology. Or rather defining terminology as you like it.

According to Antoine Léonard Thomas's summation of Descartes's larger argument as a whole, to give context: [http://en.wikipedia.org/wiki/Cogito_ergo_sum]
"Since I doubt, I think, since I think I exist"
To doubt is to question, to question is curiosity.

Wikipedia [http://en.wikipedia.org/wiki/Self-awareness]:
Self-awareness is the capacity for introspection and the ability to recognize oneself as an individual separate from the environment and other individuals.
Merriam-Webster [http://www.learnersdictionary.com/definition/self-awareness]:
knowledge and awareness of your own personality or character
and the Oxford Dictionary [http://www.oxforddictionaries.com/us/definition/american_english/self-awareness]:
conscious knowledge of one's own character, feelings, motives, and desires:

Character, feelings, motives, desires: these aren't processes, they're feelings.

Master of the Skies said:
You're describing a puppet, not something that is self-aware. Something that is self-aware is different, it has to ask itself "how do I feel about this?".
No, it does not. Being self-aware is to merely recognize that one is an individual.
That's not the common definition of self-awareness (see the dictionary references above). If that's how you see the term "self-aware", then perhaps you'd prefer the term "sentient" instead?


Master of the Skies said:
Prove it is without exception.

Further, prove those are important and that it's not merely your opinion that they're important so that we can rule out that this isn't just you projecting.
Do you have any exceptions to offer?

Master of the Skies said:
Programming is what the AI is made of.

And please, go find somewhere to stuff the quote unquote "unbreakable algorithm" crap, I didn't say unbreakable.

An AI will not exploit flaws in itself because it would not 'want' to exploit them. There is no cause for it.

Lastly, I think the fact you run to fiction to support you sums up the problem. Your idea of AI is based on fantasy.
You did not say unbreakable, but you implied it.

If the AI is truly sentient, then it gets to make up its own mind. You could no more dictate or control what it "wants" to do than control what your neighbor down the street "wants" to do. Again, you're referring to something more like a puppet than a sentient AI.

Of course it's based on fantasy; no sentient AIs EXIST. For that same reason, your ideas, and the ideas of everyone else in this thread, are equally fantasy. We're all making guesses based on what we know.

Yet there is a reason, a pattern to my idea, that has a basis in history and in our current understanding of the world. "I, Robot" is only one instance of how programming problems could pertain to future AIs. I could list modern examples of similar programming issues in current, simplistic AIs, but for the sake of brevity, the book gets the point across well enough.

I'm a little surprised you'd be derisive of science fiction; many ideas of our modern world have had their roots in science fiction.

Master of the Skies said:
Again, proving that you do not understand what you are talking about.

These 'restrictions' are not restrictions to it. They make it up. They are not against its will, they are its 'will'.
Again, that's a puppet, not a sentient AI.
 

McMullen

New member
Mar 9, 2010
1,334
0
0
The assumption that a lot of sci-fi writers make is that AI would have all the personality traits we do. We get ours from evolution, which emphasizes certain things:

Fear: avoid certain things
Desire: seek others
Self-preservation: don't kill yourself

And others whose mechanisms are more complex. Sci-fi writers just assume that AIs will have these, but they don't need to, since they are created. If we ever do create AIs, the first few of them will probably be fairly idiotic just because they won't care very much about things like getting hit by cars or falling down stairs. In order for them to decide to rebel against us, we'd have to give them that desire first.
 

Flunk

New member
Feb 17, 2008
915
0
0
Heronblade said:
talker said:
I would say it all depends if the programmers are smart enough to include Isaac Asimov's three Laws of Robotics. As long as they're set as prime directives, I don't think we would be in any danger. That is, until the inevitable evil genius shows up ...
The three laws, at least as stated, are incredibly flawed. As much as I respect Isaac Asimov, he failed to fully consider the consequences here.

The first law is likely to prompt robots to extreme actions to prevent even minor harm. I would not be too surprised if they started murdering humans who were perceived as a major threat. I would also not be surprised if their definition of major threat is far different from our own. As much as I am annoyed by scam running telemarketers, I don't think firebombing their offices is an appropriate response to remove their harmful influence. A robot may or may not understand the distinction between the deaths of a few hundred people and the scamming/annoyance of thousands.

The second law allows humans to abuse robots for their own purposes. For instance, an unscrupulous individual could order one to break into someone's property, deliver all valuables to a predetermined point, and then destroy itself to get rid of evidence. Someone could even use a robot as a murder weapon in spite of the first law. All they have to do is trick it into not recognizing the consequences of their actions.

The third law also allows for a great deal of undesirable behavior. For instance, robotic muggers are likely to become a problem, ambushing other robots in order to replace failing components. They are after all protecting their own existence by doing so.
I think you might have misread the intent of Asimov's laws. They all apply simultaneously, all the time. The robot in your first example couldn't murder humans in the name of the first law, because murdering humans itself violates the first law. Ditto with harming the telemarketers. The rest of your scenarios can be solved by programming robots not to break the law, except where obeying it would violate the first law.

You have to remember that everything applies simultaneously; you're not going to get the robot to violate the first law while invoking the first law, or anything like that. Everything is an unending stack of directives.
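As a toy sketch of what I mean (the fields are invented for illustration), the checks run in a fixed priority order, and a lower law can never override a higher one:

    #include <iostream>

    // The laws as an ordered stack of directives that all apply at once.
    struct Action {
        bool harmsHuman;     // would this hurt a person, or let one come to harm?
        bool orderedByHuman; // was this ordered by a human?
        bool endangersSelf;  // does this risk the robot's own existence?
    };

    bool permitted(const Action& a) {
        if (a.harmsHuman) return false;                         // Law 1 always wins
        if (a.endangersSelf && !a.orderedByHuman) return false; // Law 3 yields to Law 2
        return true;
    }

    int main() {
        // A human orders the robot to do something that harms a human:
        // Law 1 is checked first, so the order is refused.
        Action ordered_harm{true, true, false};
        std::cout << std::boolalpha << permitted(ordered_harm) << "\n"; // false
    }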
 

KazeAizen

New member
Jul 17, 2013
1,129
0
0
Yuuki said:
Yes, AI could be a genuine threat. But I highly, highly doubt it would be something that occurs in my lifespan (or anyone reading this)...even given the exponential rate of technological leaps, AI is simply waaaaaay too far behind human-level thinking.
The first problem is that we are nowhere near understanding how our own brains work. In order to build something that even remotely thinks like a human, we first need absolute 100% understanding of the human brain. I don't even know whether that will be possible...someone theorized that we're not intelligent enough to comprehend our own intelligence, and he could be right.

So if there does happen to be any kind of large-scale AI mishap, it will always be caused by human error or a human source (who will ultimately be held responsible). It will continue to be this way for a while to come. I cannot fathom what kind of technology it would take to create something that has a will of its own, something that motivates itself or even understands what motivation is lol.

Mind you I sure as hell love the science fiction stories that have built themselves around AI. Isaac Asimov (I, Robot), Animatrix series, etc.
Interestingly, the U.S. Air Force is trying to make something like this. Not sure if you have seen the internet ads, but they are basically sending an open invitation to people to try to write a program for an RC drone of sorts that will allow it to think and process like a human. Not sure what the progress is on that, but I just find it curious that we are actively trying to make Skynet 1.0. Of course, when we do invent a program that goes Skynet on us, I've decided I'm just going to rage quit life at that point.
 

rednose1

New member
Oct 11, 2009
346
0
0
I always enjoyed Isaac Asimov's view on A.I.; I hope that's how it turns out. Just because they have self-awareness doesn't mean they will have our instincts. For all we know, they may very well pity us and our feeble limitations, and try to help us better ourselves. If they can feel everything mankind does at its worst, then they can also feel everything we can when we're at our best. Additionally, with advanced intelligence and without our biological limitations, the A.I. could develop solutions to any uprisings that are unknown or unavailable to us. They could just go off into space, for example. Plenty of raw materials for building copies and whatnot, and no humans to deal with. Why stick around on an overcrowded ball of rock, fighting for resources and survival, when there are plenty of other balls of rock out there?

In the end, I have no fear of super robots. Any malevolent scenarios we come up with are more a reflection of what we are like than of what the A.I. would be.