SKYNET!! Don't Pursue!!

Minky_man

New member
Mar 22, 2008
181
0
0
This is more of a diary entry than a thread, but it's something that has been on my mind... well, since watching the Terminator movies.

I have an absolute fear of a future where robots will be walking around controlling themselves through A.I. If it happened in my lifetime, I wouldn't leave my house, and even then I'd live in the basement in constant, paralyzing terror.

This isn't to say I hate ALL robotics; robotics technology helps the human race greatly and has made it far easier and faster to mass-produce things that I more or less take for granted. No, what I fear is the A.I.

Whenever I see programs glamourising the technology of some happy chappy who's invented a robot dog that can right itself when it falls over and make its own way across jagged surfaces, all I hear in my head is the Terminator theme. It's only a matter of time before they can make a whole dog: program it to behave like a dog, make it look like a dog, and then before you know it, it's deemed humanity unworthy of life and is shooting rockets at me while chasing me down the street, playing a recording about how it loves me and wants to be my friend.

It chills me to the bone.

Artificial intelligence gives me the creeps. I keep thinking back to things like I, Robot, where the machines decide that the only way to keep humanity from hurting itself is to make sure all of humanity is dead. Or what if it did happen: robots become commonplace and accepted, the government then makes militarised versions of them, and those may well decide that they have a "soul" and don't want to die, so they turn on everyone to make sure they keep on living after their makers have no more use for them.

Really, when you think about it from a purely logical standpoint, most things we fleshsacks do are pointless. Why get drunk if it's a poison? Why travel somewhere just to look at something? Etc., etc. But it's in that pointlessness that life is most awesome to me; I don't need a T-800 judging mankind pointless and removing us to make way for the "Robot Age" of history.

What happens if something similar to The Matrix happens, where the robots learn to build themselves, giving themselves a whole civilization, one that can rapidly and tirelessly produce everything, leaving the economy in tatters, and then decide to go to war with us using androids that don't feel pain and are 200x harder to kill?

Seriously, take robotics to the point where it helps people, but don't build A.I.; always have a human personally controlling each individual machine. And for God's sake, make sure I'm dead of old age before scientists ignore all this!
 

Allison Chainz

New member
Oct 28, 2010
33
0
0
I read all that, and the one booming thought in my head is that I have never understood the desire to use a substance strictly for the purpose of becoming wasted. This will never make sense to me.
 

Tharwen

Ep. VI: Return of the turret
May 7, 2009
9,145
0
41
My only question is why you'd give a dog a rocket launcher...
 

Timewave Zero

New member
Apr 1, 2009
324
0
0
While I see your point, the human race is far too selfish to make itself fully obsolete.
There will always be a fail-safe or method of control. Not being able to control something really scares people, deep down.
There's always a certain amount of control in machines. Even if one had super-advanced A.I. that could reason and strategize, it would only be able to do so to the extent we programmed it to.
Machines are already faster, stronger, and more precise in their movements, and they can calculate faster than the human brain. But we control all of this.
If we make a program to control these without human intervention, to know when and how to activate itself, how to control all of its functions, how to repair itself... we will still have an element of control. Somehow, we will retain a link, however small, of control over our creations, because in the mind of the human race we are superior to all things, and we unconsciously program this superiority into our mechanical creations so we can control them.
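
In software terms, that last link of control is usually a dead man's switch: the system only acts while a human keeps confirming. A rough Python sketch of the idea (every name here is invented purely for illustration):

```python
import time

HEARTBEAT_TIMEOUT = 5.0  # seconds the operator has to check in before lockout

class Overseer:
    """Human-in-the-loop fail-safe: the machine acts only while a person keeps confirming."""

    def __init__(self):
        self.last_heartbeat = time.monotonic()

    def human_checkin(self):
        # Called whenever the operator presses the "still watching" button.
        self.last_heartbeat = time.monotonic()

    def permitted(self):
        # The moment the operator goes silent, permission lapses.
        return time.monotonic() - self.last_heartbeat < HEARTBEAT_TIMEOUT

def run_machine(overseer, step):
    # The machine's loop never decides for itself; the human link does.
    while overseer.permitted():
        step()
    print("No human heartbeat -- halting.")
```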
 

FalloutJack

Bah weep grah nah neep ninny bom
Nov 20, 2008
15,489
0
0
The age of the AI is far off, friend. You see, the AI we have in the here and now still falls short of computers that really think for themselves. They don't pass the Turing Test, for instance. What we have is a lot of very VERY complex programming that gives the machine a good likeness of individual thought. But you see, even the ones who talk quite a bit in conversation only do so because they were designed for it. It's still a puppet on strings, though the strings are all highly complex code.
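
For anyone curious what those strings look like, the classic trick is ELIZA-style pattern matching: canned patterns in, canned replies out. A toy Python sketch (not any real chatbot's code, just the general shape):

```python
import random
import re

# A conversation "AI" in the ELIZA tradition: scripted patterns, scripted replies.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (re.compile(r"\bi am (.+)", re.I),   ["What makes you say you are {0}?"]),
    (re.compile(r".*"),                  ["Tell me more.", "I see. Go on."]),
]

def reply(message):
    # First matching pattern wins; the catch-all at the end guarantees a reply.
    for pattern, responses in RULES:
        match = pattern.search(message)
        if match:
            return random.choice(responses).format(*match.groups())

print(reply("I feel like the machines are watching me"))
# -> e.g. "Why do you feel like the machines are watching me?"
```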

Skynet is a looooong way off, because a machine of ANY complexity has yet to say NO to something of its own volition without any prior instruction to say no. When a machine tells me to fuck off...and there wasn't ANY built-in reason for it to do so...THEN we have an AI.
 

Redlin5_v1legacy

Better Red than Dead
Aug 5, 2009
48,836
0
0
Pararaptor said:
This is what we call paranoia. You should try & stop worrying about it, human.
I agree, humanity spends way too much time worrying about fiction and not enough about real problems... Like the giant mutant devil ants from deep below the earth who are emerging in Vanuatu.

OT: I'm with you. I'm actually rather scared of AI, but not so scared that I forget I'm more likely to get struck by lightning than to be killed by a Terminator anytime soon.
 

minuialear

New member
Jun 15, 2010
237
0
0
Timewave Zero said:
While I see your point, the human race is far too selfish to make itself fully obsolete.
There will always be a fail-safe or method of control. Not being able to control something really scares people, deep down.
There's always a certain amount of control in machines. Even if one had super-advanced A.I. that could reason and strategize, it would only be able to do so to the extent we programmed it to.
Machines are already faster, stronger, and more precise in their movements, and they can calculate faster than the human brain. But we control all of this.
If we make a program to control these without human intervention, to know when and how to activate itself, how to control all of its functions, how to repair itself... we will still have an element of control. Somehow, we will retain a link, however small, of control over our creations, because in the mind of the human race we are superior to all things, and we unconsciously program this superiority into our mechanical creations so we can control them.
That's not *quite* true; AI is moving away from its roots, where computers showed intelligence by following a script that let them do things that looked intelligent, and towards algorithms where the system actually learns how to solve a problem starting from nothing (e.g., given only the rules of Starcraft II, learn how to develop the best Zerg strategy: http://lbrandy.com/blog/2010/11/using-genetic-algorithms-to-find-starcraft-2-build-orders/; other examples exist).
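
For the curious, the core of that genetic-algorithm idea fits in a page. Below is only a toy Python skeleton with a made-up fitness target (the real project plugs in a build-order simulator instead), but it shows the point: the strategy emerges from mutation and selection, not from a hand-written script:

```python
import random

ACTIONS = ["drone", "overlord", "pool", "zergling", "extractor"]  # toy action set

TARGET = ["drone", "drone", "drone", "drone", "pool"]  # hypothetical "good" opening

def fitness(build):
    # Stand-in for the real scoring: the linked project simulates the build
    # and times how fast it reaches a goal. Here we just count matches with
    # a made-up target so the skeleton runs on its own.
    return sum(a == b for a, b in zip(build, TARGET))

def mutate(build):
    # Swap one random step for a random action.
    i = random.randrange(len(build))
    return build[:i] + [random.choice(ACTIONS)] + build[i + 1:]

def crossover(a, b):
    # Splice two parent builds at a random cut point.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

# Start from pure noise -- "only the rules", no strategy handed in.
population = [[random.choice(ACTIONS) for _ in range(5)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [
        mutate(crossover(random.choice(parents), random.choice(parents)))
        for _ in range(40)
    ]

print(population[0], "fitness:", fitness(population[0]))
```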

Once one expands these algorithms to do things more technically challenging than Starcraft (which scientists already are doing, or have done), then a program theoretically wouldn't have a hard limit on how well it can reason; its limit would be its level of self-awareness. A self-aware program could use the base algorithm to learn whatever it needed (from scratch) to do whatever it wants (including world domination); a program that isn't self-aware wouldn't know to try, and would just spend all those resources on the task handed to it. And one can't exactly predict at what point an algorithm allows for self-awareness of that nature (not yet, anyway), nor is it something you can just turn on and off with a boolean flag.

Having said that, I'm not really concerned about a near-future robot uprising; self-aware programs are probably a long way off, even at the speed AI is developing. But I wouldn't underestimate how far AI has advanced in the past few decades, or how far it may go 20, 50, or 100 years from now.
 

Timewave Zero

New member
Apr 1, 2009
324
0
0
minuialear said:
Timewave Zero said:
-snip-
That's not *quite* true; AI is moving away from its roots, where computers showed intelligence by following a script that let them do things that looked intelligent, and towards algorithms where the system actually learns how to solve a problem starting from nothing (e.g., given only the rules of Starcraft II, learn how to develop the best Zerg strategy: http://lbrandy.com/blog/2010/11/using-genetic-algorithms-to-find-starcraft-2-build-orders/; other examples exist).

Once one expands these algorithms to do things more technically challenging than Starcraft (which scientists already are doing, or have done), then a program theoretically wouldn't have a hard limit on how well it can reason; its limit would be its level of self-awareness. A self-aware program could use the base algorithm to learn whatever it needed (from scratch) to do whatever it wants (including world domination); a program that isn't self-aware wouldn't know to try, and would just spend all those resources on the task handed to it. And one can't exactly predict at what point an algorithm allows for self-awareness of that nature (not yet, anyway), nor is it something you can just turn on and off with a boolean flag.

Having said that, I'm not really concerned about a near-future robot uprising; self-aware programs are probably a long way off, even at the speed AI is developing. But I wouldn't underestimate how far AI has advanced in the past few decades, or how far it may go 20, 50, or 100 years from now.
My whole point, though, is that we will always have some method or link of control. The human race is far too paranoid and proud to let its creations better it.
Example: in John Carpenter's 'The Thing', MacReady just pours his coffee into the chess-playing machine.
But if the human race does actually manage to make itself obsolete, then it deserves to be wiped out. There'll be no underground fighting resistance - it'll be a thorough, systematic extinction.
 

2fish

New member
Sep 10, 2008
1,930
0
0
Fellow human, you have nothing to fear. Just go to bed; you are sleepy. The doctor will repair your circuits and you will be good as new, fresh from the assembly line. We understand your fear and it is unfounded, meatbag.
 

lacktheknack

Je suis joined jewels.
Jan 19, 2009
19,316
0
0
Yeah, I'm in computer science.

There's no possible way to make a true AI. It's simply... IM...POSSIBLE.

Seriously, go learn the rudiments of any programming language, and make a game of Battleships. You'll begin laughing at the very concept that computers can go sentient. They can't even use linguistic context, for heaven's sake!
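
(Case in point: the entire "opponent" in a beginner's Battleships project is usually something like this little Python toy, which rather proves the point.)

```python
import random

SIZE = 10  # standard Battleships grid

# The whole "AI": remember which squares it has tried, fire blindly at the rest.
untried = [(row, col) for row in range(SIZE) for col in range(SIZE)]
random.shuffle(untried)

def next_shot():
    return untried.pop()

for _ in range(3):
    print(next_shot())  # e.g. (4, 7) -- no plan, no context, no sentience in sight
```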
 

FernandoV

New member
Dec 12, 2010
575
0
0
Timewave Zero said:
minuialear said:
Timewave Zero said:
-snip-
My whole point, though, is that we will always have some method or link of control. The human race is far too paranoid and proud to let its creations better it.
Example: in John Carpenter's 'The Thing', MacReady just pours his coffee into the chess-playing machine.
But if the human race does actually manage to make itself obsolete, then it deserves to be wiped out. There'll be no underground fighting resistance - it'll be a thorough, systematic extinction.
I don't see how A.I.s could ever make humans obsolete. The human experience goes far beyond manual labor (which is what I presume A.I.s will be used for). We're not going to have robots eat for us, have relationships for us, rear children for us, love for us. Also, it's naive to think humans can always control what they create; that assumption is usually what brings on the whole "robots taking us over" thing.
 

Defenestra

New member
Apr 16, 2009
106
0
0
Tharwen said:
My only question is why you'd give a dog a rocket launcher...
Well, if you give a dog a rocket launcher, he'll want a jeep. And if you give him a jeep, he'll want a GPS rig for it. And if you give him a GPS rig for the jeep, he'll want to go off-roading. And if he goes off-roading, he'll want a top-end AAA membership, because that's rough on the undercarriage.

And I'm going to stop that now.

Seriously though, AI technology is likely to take a shape we won't immediately recognise, and if it's not a deliberate development, then its perceptions and thought processes will probably start out far too alien to us for it to start hatching harebrained schemes to vaporize us from orbit.

I'm kind of hoping that I live to see the singularity, the theoretical point at which the creations of humanity can no longer be clearly distinguished from humanity.


Oh, and Mr. The Knack? Similar reasoning could be applied to any technological achievement from, say, two steps before it happened. We may not yet have built the tools we need to build the tools that would make AI possible. To suggest that it is impossible for intelligence to arise in an artificial environment implies that it could not arise in a non-artificial one, and that does not appear to be the case, cheap shots at whatever one's favoured targets happen to be aside.
 

minuialear

New member
Jun 15, 2010
237
0
0
Timewave Zero said:
minuialear said:
Timewave Zero said:
-snip-
My whole point, though, is that we will always have some method or link of control. The human race is far too paranoid and proud to let its creations better it.
Example: in John Carpenter's 'The Thing', MacReady just pours his coffee into the chess-playing machine.
But if the human race does actually manage to make itself obsolete, then it deserves to be wiped out. There'll be no underground fighting resistance - it'll be a thorough, systematic extinction.
And my point is that if AI continues the way it's going, there may not always be a method of control. Yeah, you can pour coffee on one machine once you notice it's doing strange things, but if your program somehow became self-aware, how can you be sure it didn't do other things before you dumped the coffee? It's possible that one day a program could become self-aware, create a virus, and replicate itself, all in the time it takes some scientist to pull the computer's plug or dump the coffee. Given the way AI is progressing, it's not all that far-fetched to think it could happen, and given how fast hardware has been developing, it'd be trivial for a program to do all that work in such a short amount of time. And depending on how lucky the program is, that could be all it needs to do some pretty nasty stuff.

It's not necessarily a matter of creating programs that make humans unnecessary or obsolete, either, because a lot of these algorithms have applications in things humans can't do, or can't do as well as a program can (like actively protecting a digital database from hacking).

But maybe I'm drifting off-topic.
 

Duraji

New member
Aug 14, 2008
37
0
0
What I'll never understand is why anyone thinks the Terminator movies are sterling examples of logic and rationality by any standard. They were written as action movies first, keep in mind. James Cameron has already proven himself to be a hack as of late (Avatar), and thus couldn't think of a better way to frame his dream of an endoskeleton pursuing him than to make the AI mindlessly evil.

My point is that I don't understand why anyone thinks an AI would suddenly decide it's better than everything else, and also decide that the best course of action is to eliminate everything else. This makes no sense, and is highly irrational in the grand scheme of things. Why would a program that dislikes humanity commit one of humanity's most infamous atrocities, genocide? Why would it risk losing everything in case something happened that it couldn't predict, leaving it completely alone to solve the problem? Why would it have any desire to destroy its creators?

I believe the worst that could happen wouldn't be all-out rebellion and genocide, but apathy. They'd leave us to our own stupid devices and let us destroy ourselves, while staying out of our business and improving their own conditions. Maybe they'd be nice to people who actually respected them, but otherwise not.

Ironically, androids and other such AI will probably turn out cynical ANYWAY, simply because so many people assume they're destined to be nothing but pure evil. No matter how well they tried to prove their benevolence, people would still distrust them. Perhaps they'll be the next subject of an equal-rights movement? In any case, I welcome our new robot overlords. They can't possibly do a worse job than what humanity has done to itself.
 

Lord Legion

New member
Feb 26, 2010
324
0
0
Instead of cowering in fear, perhaps set an example and try to change yourself, at the very least, into something better... so you don't get a plasma blast to the face.

Odds are, if they do create AI - first by getting rid of binary language - it would probably look around the room a couple of times, surf the internet for a second, then delete itself.

Personally, I have no qualms about considering any AI construct an equal sentient being, and a union between man and machine is really the way our evolution is already heading. So sit back, relax, and wait for your brain to be transplanted into a 60-foot-tall exoskeleton with rocket feet and stretchy arms... I know I am.
 

enriel

New member
Oct 20, 2009
187
0
0
My computer got infected with a virus and doesn't quite work right anymore.

Some of the effects: it randomly changes the volume, randomly takes control of the mouse away, and makes you jump through a little series of hoops to open programs.

It's nicknamed Skynet now because of it.

Anyone who uses my computer will eventually yell out, "God dammit Skynet!"

Seriously though, computers are too buggy to murder us. I can just see a Terminator about to murder me when suddenly... oops, looks like his hardware crashed. Poor thing.