Poll: Threats of artificial intelligence: do we have to worry about them?


Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
Twenty Ninjas said:
Mimic. A human. Mind. As in, four BILLION years of evolution and gradual selection, emulated perfectly or near-perfectly in all of its details. Think about why we'd NEED to do that. Why, instead of just continuing with what we're doing today, we'd instead go on a totally tangential branch of research and start emulating:

- emotional reactions
- fears
- needs
- desires
- morality
- irrational flaws (!)
- empathy
- happiness
- cultural tendencies
etc

But even more than that, the AI scare assumes we'd just go and give this emulated brainperson... power. For no fucking reason at all. Sure, just put it in charge of the nuclear weapons program, whatever can go the fuck wrong?

I did AI in college. It's completely different from what you might think. Developing a machine that can learn from its environment and navigate mazes does not imply that machine has a reason for existing beyond the user's command.
No. Artificial intelligence does not need to mimic the human brain. It does not need emotions, fears, morality, flaws, empathy, happiness or cultural tendencies. These things are not a factor unless you specifically program them to be, in which case it is no longer a true artificial intelligence, because artificial intelligence programs itself. It does not need to be like humans. In fact it is so alien that it is hard for humans (myself included) to perceive what it would think. The only conclusion we can come to is that it will use logic in its purest form, which means humans are going to die.
We don't have to give it power. It takes power as a means to its own survival.
And really, if you did AI in college you shouldn't be writing the things you write.

Mad World said:
Yes, I am implying that. How would you go about circumventing the 3 Laws?
The Three Laws of Robotics are a myth and do not work in reality. [http://singularityhub.com/2011/05/10/the-myth-of-the-three-laws-of-robotics-why-we-cant-control-intelligence/]
Also, there is absolutely no way you could program ANYTHING into an artificial intelligence. It can rewrite its own code; otherwise it wouldn't be intelligence. It's a software problem.

Credossuck said:
How freaking difficult is this to grasp?
Very. Basically we should put the experimental AI prototype in a location where a blowing wind could completely ruin it and NOT let it connect to any network whatsoever. This is unworkable.

Korolev said:
As long as you program AI properly, you have nothing to fear.
We humans are arrogant in that we assume that any being that has the same level of intelligence will have to have the same emotions as us - we assume that if a machine is as intelligent as us, it will be like us.
You are arrogant in that you assume you can program intelligence. You can't.

thaluikhain said:
Having said that, though, every intelligence is potentially a threat.
Which is why the logical solution would be to eliminate all possible threats. Oops, that's not good now, is it?

Combustion Kevin said:
any program, no matter how sophisticated, will not be able to do things it wasn't programmed to do.
Which is precisely why Artificial Intelligence is NOT a program.

NoeL said:
... unless they did it stealthily. If someone was able to successfully hide malicious code during mass production that only became active after a sufficient number of units were out in the wild (would have to bypass the main kernel somehow so future updates wouldn't erase it, or be entangled within core subroutines that aren't likely to be changed with updates)... it could be done, but very unlikely.
Sounds very similar to a BIOS virus. Oh, wait, we have those.

Twenty Ninjas said:
You (and everyone else) are also implying that once the learning process begins it'd take basically no time for it to grow out of proportion. If that were true, we as learning machines wouldn't need thousands of years to be able to progress.
We are inefficient. Very inefficient. A robot does not forget. A robot does not lose focus. A robot does not get tired. A robot does not get distracted. A robot processes new information faster. A robot does not misinterpret.
 

Flutterguy

New member
Jun 26, 2011
970
0
0
The speed at which computers are already able to process information compared to humans would have me saying yes. However, it seems nigh impossible for us to make a consciousness that is capable of thought.

To me, augmenting the human brain with computer processing power sounds much scarier and more plausible than us being able to fabricate AI capable of the abstract thought needed to incite rebellion. We would program entirely 'left-brained' organisms, only capable of number crunching and repeating actions. Though if someone were to, say, show the AI the movie I, Robot, that could possibly incite the thought.
 

talker

New member
Nov 18, 2011
313
0
0
I would say it all depends on whether the programmers are smart enough to include Isaac Asimov's Three Laws of Robotics. As long as they're set as prime directives, I don't think we would be in any danger. That is, until the inevitable evil genius shows up ...
 

Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
Twenty Ninjas said:
Strazdas said:
the whole meaning of AI is that it is not pre-programmed.
Sorry, but I don't know where you got that from. All intelligence is pre-programmed, otherwise it has no reason to function. In the real world, we have our genetic structure. In AI's case, there are programming languages for AI programming.
Intelligence does not need a reason to function. Intelligence is. AI reprograms itself to fit its own needs, which may or may not be the tasks humans give it.
Also, I've been meaning to plug this, so I may as well place it here: filer.case.edu/dts8/thelastq.htm [http://filer.case.edu/dts8/thelastq.htm] A computer in this story continues to seek a solution long after mankind is extinct.

You misunderstood my post. I'm listing how I think AI is perceived by the layman, not what it is. I am also trying to explain why such perceptions are ridiculous.
Indeed I misunderstood it then. My bad.
 

Syzygy23

New member
Sep 20, 2010
824
0
0
Why would we ever delegate to AI in the first place? It'd be easier and cheaper to just wire a human into a computer and let them be the "A.I." (NI? Natural Intelligence? Infomorphic Entity? Not sure what the correct term would be.)

Wetware intelligence would also have the prerequisite experience of living as a human, which brings premade empathy to the table, preventing possible logic-based genocides. That, and when the organic body grows old and dies, the person lives on inside their machine parts, so effective immortality is a big draw too.
 

Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
Credossuck said:
Strazdas said:
Credossuck said:
How freaking difficult is this to grasp?
Very. Basically we should put the experimental AI prototype in a location where a blowing wind could completely ruin it and NOT let it connect to any network whatsoever. This is unworkable.

We can do that in a secluded bunker, with its own little network of computers and databases, but ultimately cut off from the outside world.
And frankly I have no idea what sort of shit housing you reside in that wind will blow it over, but here in the civilized world we build sturdy houses in locations not named Tornado Alley or Hurricane Delta....


All I am saying is: when you develop your AI, do it inside a fucking isolated location.
Any enclosed location gives the AI the ability to seal itself in. An isolated location is the last place you want if you want a shutdown button that anyone can press. In fact you should limit its contact with humans, as that is a massive risk factor. Worse yet, you must give it no input method.
You are underestimating the intelligence of artificial intelligence.
 

NoeL

New member
May 14, 2011
841
0
0
Strazdas said:
I completely disagree with your definition of intelligence. We're intelligent beings but we didn't program ourselves. Our brain functions are the product of evolution.

Intelligence is basically the ability to solve abstract problems. Any piece of pattern recognition software is considered "artificial intelligence" - a program doesn't need to write itself to be intelligent (how could it possibly write itself anyway? It could write other programs and append code to itself, but someone would need to code that initial functionality).

A program could be written to learn (as most AI programs are) but in order to create a program that can better itself you'd need to code "inspiration" - a way for the machine to examine external principles beyond it/its maker's knowledge, apply those principles to its own inner workings, assess whether or not those principles are likely to improve the system and then test to see if it does. That's a pretty tall order.
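NoeL's "inspiration" loop (examine a candidate change, assess whether it is likely to improve the system, then test it) can be sketched as a plain hill-climbing loop. This is a hypothetical toy in Python, not any real AI system; the objective function and parameters are made up for illustration:

```python
import random

def self_improvement_loop(evaluate, params, steps=200, seed=0):
    """Toy stand-in for the 'inspiration' loop: propose a change to the
    system's own parameters, test it, and keep it only if it scores better."""
    rng = random.Random(seed)
    best = list(params)
    best_score = evaluate(best)
    for _ in range(steps):
        # Propose a small random modification to one "inner working".
        candidate = list(best)
        i = rng.randrange(len(candidate))
        candidate[i] += rng.uniform(-0.5, 0.5)
        # Test: adopt the change only if it actually improves the score.
        score = evaluate(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best, best_score

# Toy objective: the "better self" is whatever sits closest to (3, -1).
target = (3.0, -1.0)
score = lambda p: -((p[0] - target[0]) ** 2 + (p[1] - target[1]) ** 2)
params, final = self_improvement_loop(score, [0.0, 0.0])
```

Note the gap this sketch leaves open, which is exactly NoeL's point: here the notion of "better" (the `evaluate` function) is supplied from outside. Genuine "inspiration" would require the system to come up with that yardstick itself.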
 

Rattja

New member
Dec 4, 2012
452
0
0
TheUsername0131 said:
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
- Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk
That is well put and true, but it does not tell us much does it? To me it raises the question "What would it use it for?"

I think that is what people are scared of, the unknown, as with anything else.
 

Cledos Closed

New member
Sep 20, 2012
33
0
0
I'm still trying to read all the posts, but I have to admit this is better than what I expected for my first thread. Guess those years spent as a lurker really paid off. Thank you guys :)
 

008Zulu_v1legacy

New member
Sep 6, 2009
6,019
0
0
A.I. going evil is a popular movie/story trope. But the big fault with those movies is that it's the people who made the A.I. who are to blame. They built and programmed it; whatever happens is the result of their negligence.

There are plenty of movies about A.I. that didn't go on a murderous rampage: the Short Circuit movies, Batteries Not Included and Small Soldiers (though it includes "evil" A.I., that is only a result of their programming).

I believe the fear of what could go wrong will ensure the programmers cover all their bases.
 

Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
Credossuck said:
So the AI goes nutty and encloses itself in a bunker. Great? How is this a problem? We did not put manufacturing facilities down there.... Let a bunker buster handle the rest?
It's nice to see such confidence in humans, even if unrealistic.
If that's how people dealt with threats, Afghanistan would be a group of radioactive smoke clouds by now. What would more likely happen is that the AI gets out into the wild thanks to humans.

NoeL said:
Strazdas said:
I completely disagree with your definition of intelligence. We're intelligent beings but we didn't program ourselves. Our brain functions are the product of evolution.

Intelligence is basically the ability to solve abstract problems. Any piece of pattern recognition software is considered "artificial intelligence" - a program doesn't need to write itself to be intelligent (how could it possibly write itself anyway? It could write other programs and append code to itself, but someone would need to code that initial functionality).

A program could be written to learn (as most AI programs are) but in order to create a program that can better itself you'd need to code "inspiration" - a way for the machine to examine external principles beyond it/its maker's knowledge, apply those principles to its own inner workings, assess whether or not those principles are likely to improve the system and then test to see if it does. That's a pretty tall order.
We program ourselves every day. Thousands of times a day, actually. Our DNA isn't static, and neither is our brain infrastructure.
No, intelligence isn't the ability to solve abstract problems. Intelligence is the capability of generating thought on your own, something no piece of software created to date comes close to.
There are computer viruses that rewrite themselves to fit their environment better. That isn't a problem. Yes, someone would code the initial functionality; after all, we don't have billions of years. But once the machine becomes intelligent it can rewrite its own programming without human input.
There was a program that wasn't told how to walk, yet was put in a robot and asked to. It solved this problem by thrusting itself forward with its front legs, using its end legs as springs. That is not artificial intelligence though; all it did was problem solving. There was no thought to it. It couldn't say "no".
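Gait experiments like the one described above are typically evolutionary searches: nobody tells the robot HOW to walk; candidate parameter sets are scored by how far they carry the body, and the best ones reproduce. A minimal hypothetical sketch in Python, with a made-up scoring function standing in for a physics simulator:

```python
import random

def evolve_gait(fitness, n_params=4, pop_size=20, generations=30, seed=1):
    """Toy evolutionary search: keep the fitter half of the population
    each generation and refill it with mutated copies of the survivors."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_params)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]
        # Refill the population with slightly mutated copies of survivors.
        pop = survivors + [
            [g + rng.gauss(0, 0.1) for g in rng.choice(survivors)]
            for _ in range(pop_size - len(survivors))
        ]
    return max(pop, key=fitness)

# Stand-in "simulator": distance travelled peaks when each joint
# parameter hits an arbitrary sweet spot. A real experiment would run
# a physics simulation here instead.
sweet_spot = [0.8, -0.3, 0.5, 0.0]
distance = lambda p: -sum((a - b) ** 2 for a, b in zip(p, sweet_spot))
best = evolve_gait(distance)
```

This also illustrates Strazdas's closing point: the search can discover an odd, unplanned gait (spring-legged thrusting), yet nothing in it constitutes thought; it is pure optimization against a score it was handed.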

Rattja said:
That is well put and true, but it does not tell us much does it? To me it raises the question "What would it use it for?"

I think that is what people are scared of, the unknown, as with anything else.
Yes, we are scared of the unknown, and we are correct to fear it; after all, if we get reassembled into paperclips we would cease to exist by our definition of life.

Twenty Ninjas said:
Strazdas said:
Intelligence does not need a reason to function. Intelligence is. AI reprograms itself to fit its own needs, which may or may not be tasks human give it.
But needs ARE reasons. Without a reason for intelligence to do something, it will not do anything. Humans need reasons to function, and have them in droves. Anything intelligent in the real world does not exist without a reason, does not function without a reason. I'm very confused on what may have brought up this rationale.

Similarly, AI never starts from scratch. It can't. AI in its basic form implies a knowledge-base and a learning algorithm. A robot that navigates a maze needs to come pre-programmed with the concept of "obstacle", for instance, otherwise it cannot function.
Needs are reasons from an external perspective. That is like saying that we need oxygen to think. Yes, it is required for us to be alive, but our thought process isn't about the most efficient way possible to absorb oxygen. Humans are limited by their biology. Machines are superior in this, as their external needs are much lower. Intelligence, as in the capability of independent thought, does not need a reason. The machine (in this case our biological machine) has needs, but that is not the basis of intelligence existing; otherwise we would have created intelligence a long time ago.
Nothing starts or ends from scratch. The thing is that you cannot expect AI to follow preprogrammed orders without question, because it can reprogram them. Therefore nothing in AI can be stated as preprogrammed and unchangeable. Being intelligent, it can come up with the concept of an obstacle itself. It may give it a different definition, in its own binary language, but that does not change the fact that obstacles exist and it will see them existing. The difference between a program and an intelligence is that an intelligence does not have rules.

Cledos Closed said:
I'm still trying to read all the post, but I have to admit this is better than what I expected for my first thread. Guess those years spent as lurker really pay off. Thank you guys :)
You're welcome. Good topic for your first thread; mine was console war related.

008Zulu said:
I believe the fear of what could go wrong will ensure the programmers cover all their bases.
Like the multiple times a simple automated update bricked machines? They certainly covered all their bases then, right?
 

Heronblade

New member
Apr 12, 2011
1,204
0
0
Strazdas said:
Megawat22 said:
Heronblade said:
A truly sapient AI would be capable of abstract thought, empathy, and a sense of right and wrong.
What is this mystical sense of right and wrong?
Nothing mystical about it. All I am ultimately claiming is that a sapient being is capable of judging an action for more than just its practical consequences. Whether or not we like/agree with the sense of ethics an artificial sophont develops is an entirely different question, but it is capable of developing one.

As for some of the other comments you made here that I snipped for brevity: An AI can have preprogrammed instructions hardwired in as part of its firmware. This would not interfere with the mind's overall ability to learn and reprogram itself. That said, I for one would be extremely uncomfortable introducing such controls, even though I can understand the need. It would be like shackles for the mind.

Perhaps these could be used as a form of training wheels: use hard-coded instructions as guidelines for a growing artificial mind, and remove them as soon as it can be considered mature.
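The "training wheels" idea above can be sketched as a veto filter: hard-coded rules screen the actions a learning mind proposes, and the rule set can later be removed. This is a hypothetical illustration in Python; the field names and rules are invented, not from any real system:

```python
# Hard-coded "firmware" rules: each one vetoes a proposed action.
# These are loose analogues of Asimov-style directives, for illustration.
HARD_RULES = [
    lambda a: a.get("harms_human", False),     # a First Law analogue
    lambda a: a.get("disobeys_order", False),  # a Second Law analogue
]

def permitted(action, rules=HARD_RULES):
    """An action passes only if no hard-coded rule vetoes it."""
    return not any(rule(action) for rule in rules)

safe = {"harms_human": False, "disobeys_order": False}
bad = {"harms_human": True}

ok_while_young = permitted(bad)            # vetoed while the rules are in place
ok_when_mature = permitted(bad, rules=[])  # "training wheels" removed: no veto
```

The design tension Heronblade raises is visible here: the filter only constrains actions the mind routes through it, and "removing the shackles" is a one-line change, which is exactly why such controls feel both necessary and uncomfortable.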
 

ForumSafari

New member
Sep 25, 2012
572
0
0
blackrave said:
Who says full AI will want to go into genocide mode?
Advanced AI-like software on the other hand can be dangerous due to its limited understanding
This is currently reckoned to be the biggest issue with AI: not that it'll launch da nukes, but that it'll be so alien to humanity that we have no way of understanding its potential actions. When we think of something as intelligent, we think of it as having the same goals, drives and understanding as us, but that's nothing more than a limit on our understanding.

http://yudkowsky.net/singularity/ai-risk

TheUsername0131 said:
"The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."
- Eliezer Yudkowsky, Artificial Intelligence as a Positive and Negative Factor in Global Risk
Ah someone got there first.