SKYNET!! Don't Pursue!!

Minky_man

New member
Mar 22, 2008
181
0
0
enriel said:
Seriously though, computers are too buggy to murder us. I can just see a terminator about to murder me when suddenly...oops, looks like his hardware crashed. Poor thing.
Or on the flip side, whilst the android is making you toast, suddenly your mobile phone rings, disrupts its signals, and it believes you're a rat and proceeds to beat the ever-loving hell out of you, all to the sound of the Nokia ringtone! =P

Yes, my fears are generally irrational, and to be fair, most of what I know comes from theories, ideas, and films rather than any deep knowledge of coding and computer science. It could be debated (heavily, mind you) that maybe that's (sort of) a good thing: if I'd been taught that this is the way computer science is, with a checklist of what's been done, what's possible, and what's impossible, it would lead to being almost brainwashed into thinking that everything is bound by that checklist.

Should I start building bunkers and electromagnetic fields around myself just in case? Hell no! But I am allowed to worry about it, so long as it doesn't affect my current lifestyle in any way.
 

Timewave Zero

New member
Apr 1, 2009
324
0
0
FernandoV said:
Timewave Zero said:
minuialear said:
Timewave Zero said:
While I see your point, the human race is far too selfish to make itself fully obsolete.
There will always be a fail-safe or method of control. Not being able to control something really scares people, deep down.
There's always a certain amount of control in machines. Even if one had super-advanced A.I. and could reason and strategize, it would only be able to reason and strategize to the extent we programmed it to.
Machines are already faster, stronger, and more precise in their movements, and can calculate faster than the human brain. But we control all of this.
If we make a program to control these without human intervention, to know when and how to activate itself, how to control all of its functions, how to repair itself... we will still have an element of control. Somehow, we will retain a link, however small, of control over our creations, because in the mind of the human race we are superior to all things, and we unconsciously program this superiority into our mechanical creations so we can control them.
That's not *quite* true; AI is moving away from its roots, where computers showed intelligence by following a script that let them do things that looked intelligent, and towards algorithms where the system actually learns how to solve a problem starting from nothing (e.g., given only the rules of StarCraft II, learning how to develop the best Zerg strategy: http://lbrandy.com/blog/2010/11/using-genetic-algorithms-to-find-starcraft-2-build-orders/; other examples exist).
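
For anyone curious, the core of what that link describes is surprisingly small. Below is a minimal Python sketch of a genetic algorithm in the same spirit; the action names, target build, and fitness function here are invented for illustration (the real project scores build orders by simulating the game's economy, not by matching a hand-picked answer):

```python
import random

# Toy genetic algorithm in the spirit of the linked build-order experiment.
# ACTIONS, TARGET, and fitness() are made up for illustration only.
ACTIONS = ["drone", "overlord", "spawning_pool", "zergling"]
TARGET = ["drone", "drone", "overlord", "spawning_pool", "zergling"]

def fitness(order):
    # Score a candidate by how many positions match the hand-picked target.
    return sum(a == b for a, b in zip(order, TARGET))

def mutate(order):
    # Swap one random step for a random action.
    i = random.randrange(len(order))
    return order[:i] + [random.choice(ACTIONS)] + order[i + 1:]

def crossover(a, b):
    # Single-point crossover: front of one parent, back of the other.
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(generations=100, pop_size=50):
    # Start from completely random build orders ("starting from nothing").
    pop = [[random.choice(ACTIONS) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]  # keep the fitter half
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(pop_size - len(survivors))]
        pop = survivors + children
    return max(pop, key=fitness)

print(evolve())
```

The notable thing is that nothing in that loop knows anything about StarCraft; swap in a fitness function that simulates the game's economy and the same machinery searches for real build orders.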

Once one expands these algorithms to do things more technically challenging than StarCraft (which scientists already have), then a program theoretically wouldn't have a hard limit on how well it can reason; its limit would be its level of self-awareness. A self-aware program would be able to use the base algorithm to learn whatever it needed (from scratch) to do whatever it wants (including world domination); a program that isn't self-aware wouldn't know to try this, and would just spend all those resources on whatever task it was handed. And one can't exactly predict at what point an algorithm allows for self-awareness of that nature (not yet, anyway), nor is it something you just turn on and off with a boolean flag.

Having said that, I'm not really concerned about a near-future robot uprising; self-aware programs are probably a long way off, even at the speed AI's developing. But I wouldn't underestimate how far AI's advanced in the past few decades, or how far it may be 20, 50, or 100 years from now.
My whole point, though, is that we will always have some method or link for control. The human race is far too paranoid and proud to let its creations better it.
Example: in John Carpenter's 'The Thing', MacReady just pours his coffee into the chess-playing machine.
But if the human race does actually manage to make itself obsolete, then it deserves to be wiped out. There'll be no underground fighting resistance - it'll be a thorough, systematic extinction.
I don't see how AIs could ever make humans obsolete. The human experience goes far beyond manual labor (which is what I presume AIs will be used for). We're not going to have robots eat for us, have relationships for us, rear children for us, love for us. Also, it's naive to think humans can always control what they create; that assumption is usually what exacerbates the whole "robots taking us over" thing.
I don't mean control over everything they make, just over the hypothetical self-aware machines.
Actually, minuialear made a post quoting me which makes quite a lot of sense, and from what he/she has said, I can't argue much. All I can say is that if the human race ever actually invents something that can outwit it, then it has become obsolete and deserves to die for being so stupid, for not thinking ahead about the possible problems, even if they are far-fetched, like a machine becoming self-aware.
 

Timewave Zero

New member
Apr 1, 2009
324
0
0
minuialear said:
And my point is that if AI continues the way it's going, there may not always be a method of control. Yeah, you can pour coffee on one machine once you notice it's doing strange things, but if your program somehow became self-aware, how can you be sure it wasn't able to do other things before you dumped the coffee? It's possible that one day a program could become self-aware, create a virus, and replicate itself, all in the time it takes some scientist to pull the computer's plug or dump the coffee. Given the way AI's progressing, it's not all that far-fetched to think it could happen, and given how fast hardware's been developing, it'd be trivial for a program to do all that work in such a short amount of time. And depending on how lucky the program is, that could possibly be all it needs to do some pretty nasty stuff.
All I can say is that if the human race ever actually, consciously invents something that can actively outwit it, then it has become obsolete and deserves to die for being so stupid, for not thinking ahead about the possible problems, even if they are far-fetched, like a machine becoming self-aware.
 

Blue_vision

Elite Member
Mar 31, 2009
1,276
0
41
Allison Chainz said:
I read all that, and the one booming thought in my head is that I have never understood the desire to use a substance strictly for the purpose of becoming wasted. This will never make sense to me.
Nor will it for me.
Oddly enough, I was kind of thinking that too...
 

Jedoro

New member
Jun 28, 2009
5,393
0
0
Give me two things (four if you include ammo) and I don't give a damn how advanced AI gets:

-Barrett M82A1 CQB with AP rounds
-Benelli M4 with depleted uranium 000 buckshot
 

Neuromaster

New member
Mar 4, 2009
406
0
0
Entertainment Media: 1
Computer Science: 0

I wish AI research were promising enough to make your fears a reality in my lifetime.

There're better things to be irrationally afraid of. Like genetically modified foods! Woooooooooo.
 

minuialear

New member
Jun 15, 2010
237
0
0
Duraji said:
My point is that I don't understand why anyone thinks that an AI would suddenly think itself better than everything else, and also decide that the best course of action would be to eliminate everything else. This makes no sense, and is highly irrational in the grand scheme of things. Why would a program that dislikes humanity commit one of humanity's most infamous atrocities, genocide? Why would it risk losing everything in case something happened that it couldn't predict and it was completely alone in solving the problem? Why would it have any desire to destroy its creators?
Good point. We don't know what self-awareness would mean to a computer; just because we're prone to violence and rebellion doesn't necessarily mean a computer would be, depending on the algorithms used to give it intelligence.

Timewave Zero said:
minuialear said:
All I can say is that if the human race ever actually, consciously invents something that can actively outwit it, then it has become obsolete and deserves to die for being so stupid, for not thinking ahead about the possible problems, even if they are far-fetched, like a machine becoming self-aware.
Fair enough. :p