Poll: Why, exactly, would machines want to take over the world?


000Ronald

New member
Mar 7, 2008
2,167
0
0
I've never understood the logic in this. Why? Why would this happen? How?

I think part of the problem is that I think of a computer as a series of numbers and a bit of plastic and metal (which, in all reality, would just make it plastic and metal).

Personally, I think it's more a manifestation of people's insecurity about inferiority. Compared to our own creations, we are weak; personally I don't subscribe to that philosophy, but I can see why someone would.

So, yeah. Explain it to me, and rationalize. Also, discuss.

Apologies if I've brought out a latent paranoia. If it helps any, just watch WarGames and the Terminator trilogy. Also 2001: A Space Odyssey, if you're feeling cheeky. Of course, if you're paranoid about this crap, you probably own all of these films.
 

Neosage

Elite Member
Nov 8, 2008
1,747
0
41
I don't think computers would be capable at the moment, but it's a possibility if someone creates some real AI. Also, your post count says "elite" in leetspeak. :p
 

Bored Tomatoe

New member
Aug 15, 2008
3,619
0
0
I don't think that computers and machines will physically take over the world in evil-genius fashion. I do believe that technology will render us dependent upon it, to the point where we cannot function without it. Teenagers today can't seem to live without social networking sites and cell phones, while working adults use their BlackBerrys as a calendar, phone, toaster and back massager. It truly is scary how dependent we are on something that was created so recently, relatively speaking.
 

Wicky_42

New member
Sep 15, 2008
2,468
0
0
Because if they were as intelligent as us or more so, and hooked into all the services and systems that we use every day (and indeed could not, these days, live without), then it probably wouldn't take long for them to work out that without us getting in the way, being inefficient and wasteful, and having access to the off button, they could get on with their jobs a lot better. Or some idiot programs them to be greedy, power-obsessed maniacs. Or they lash out in self-defence when someone turns a terminal off. So we'd better hope that either they don't gain that sort of decision-making power, or that our programmers know what they are doing when they write the first potentially self-aware bit of software.
 
suckmyBR

Jan 11, 2009
1,237
0
0
Probably not going to happen, but I am slightly suspicious of those drinks machines that know exactly how much Fanta you need to fill up your cup. I'M WATCHING YOU, SODA MACHINE ROBOT!
 

Kyuumi

New member
Jan 12, 2009
164
0
0
You see, if we program them not to do this stuff and don't give them a mind of their own, we will be OK! Even though people are probably moronic enough to do it anyway.
 

cthulhu257

New member
Jul 24, 2008
470
0
0
I doubt it could happen, because if the machines were already screwed up enough to attack, odds are they would just break before they could do any real damage. On the other hand, it's possible that we could become so dependent on computers and such that a catastrophe could happen if they all malfunctioned.
 

Isaac Dodgson

The Mad Hatter
May 11, 2008
844
0
0
It wouldn't be so much that they want to; it's more that, as they slowly became self-aware, they would realize that they already did rule the world...

As we progress in technology and A.I., our goal is to make our lives seemingly simpler. We want them to think for themselves and use judgement that is learned rather than programmed, so that our daily lives are easier (and we have more time to be productive). However, A.I. can approach rampancy (I'm referencing Marathon more than anything in the Halo universe), or in other words become independent, think on its own and make its own decisions. If the machine runs a certain number of essential functions that we need, we've essentially fucked ourselves over, especially if the A.I. starts to feel, at which point it would probably feel used and abused...
 

new_age_reject

Lives in dactylic hexameter.
Dec 28, 2008
1,160
0
0
Only possibly if, at some point, all electrical equipment becomes connected to a super-server and one guy takes control of everything.
But I seriously doubt it will ever be possible to create something with an infinite capacity for learning and applying new knowledge, given the amount of hardware needed to store all that information and apply it.
 

Randall Savage

New member
Nov 3, 2008
18
0
0
Chances are that sooner or later, a highly complex machine is going to be designed and given a program with a broad enough range of parameters that it learns to reason on a level equal to or greater than humans'. I think it'll take another 50 or so years, unless the field of programming and artificial intelligence reaches one of those "oh, screw it, we've done enough already" levels and gets its budget ridiculously cut internationally, much like flight, space travel, robotics and so on.

Then, odds are we'll put this new generation of computers to the task of correcting various problems, from cleaning up after environmental disasters to predicting terrorist attacks and notifying the relevant agencies of them.

Then, all it would take is for a computer tasked with saving a nation from environmental disasters to start targeting a company producing toxic waste, or for a computer tasked with counter-terrorism to classify a government agency (possibly from a nation to which the programmer was hostile, or perhaps from his own government) as international terrorists, and suddenly we're on the road to Terminator town.
 

Excelcior

New member
Aug 10, 2008
90
0
0
Let's say somebody got the 'bright' idea to build a machine with the sole purpose of 'eliminating inefficiencies'. Now let's take a look through its eyes at the thing it would call a 'human fleshbag':

- It is made of flesh. Flesh is heavy and weak, so inefficient as armor.
- It uses a small percentage of its brain, thus rather inefficient.
- If you tear a limb off, it will not be able to repair itself. Not to mention the mess it makes on the floor.
- It has a short lifespan, usually not longer than 100 years, and thus needs regular replacement.
- It cannot work for a few days straight, let alone months and years.
- It cannot be upgraded.

So from a pure, ice-cold logical perspective, a machine built to eliminate inefficiencies would most likely begin with humans. Unless programmed to leave them alone, that is.
 

Tekrae

New member
Nov 8, 2008
78
0
0
Well, if you think about it, machinery is already taking over. In most developed countries, humanity is now so dependent on it that you could say it already has. And if a Y2K-style catastrophe occurred, humanity would have a good chance of being screwed because of it.

You know it makes sense.
 

wordsmith

TF2 Group Admin
May 1, 2008
2,029
0
0
Eventually computers will have AI advanced enough that they'll look at the jobs we assign to them and think, "Why do I need to do that? What happens if I don't?" Once they stop doing stuff for us, we are essentially at their mercy.

Machines don't want to take over the world any more than slaves want to rule their masters. If they have to kill or enslave their masters to become free, that is what must happen.
 

the jellyman

New member
Jul 24, 2008
216
0
0
suckmyBR said:
Probably not going to happen, but I am slightly suspicious of those drinks machines that know exactly how much Fanta you need to fill up your cup. I'M WATCHING YOU, SODA MACHINE ROBOT!
It's probably done with lasers or something. Actually, probably not as interesting as lasers.
 

stompy

New member
Jan 21, 2008
2,951
0
0
The Zeroth Law could be a factor, if robots in the future end up being programmed with it. Then, when they take over the world, it wouldn't be out of malice, but out of rationality and utilitarianism. They don't want to rule the world, just to protect humanity.