Why a Machine revolution would never happen.

Seneschal

Blessed are the righteous
Jun 27, 2009
561
0
0
craftomega said:
1. Why would they?

Seriously? Machines lack any basic motivation. All they can possess is the commands they have been given.

2. They are nowhere near as advanced as you think.

Currently the most advanced programs are nowhere near as advanced as a single-celled organism. They lack the ability to adapt to new and novel situations. (While this only affects the present, not the future, I doubt we will be able to make machines as advanced as us.)

3. Yes, we can make processors that are as fast as us.

But that means nothing. While humans suck at being computers, computers also suck at being alive. As stated above, just because they have similar abilities in one area doesn't mean much; they lack all the others, such as the ability to reproduce and self-repair, you know, all the basics.




Also, did anyone else notice that some of the original ThunderCats had toes, not claws, on their feet?
Life is an emergent property stemming from clumps of protein that "can't think" and can barely be called mechanisms at all. "Machine" generally doesn't mean anything specific, certainly not a campy metal automaton or that Matrix squid thing; organic lifeforms are just as "mechanical" as a car, only we're made up of nano-machines called "living cells". The only reason we're not nigh-indestructible tungsten juggernauts is that carbon is easy to come by and easy to react with.

And no, intelligent digital life probably won't come out of fast processors. An actually plausible origin would be similar to the Geth in Mass Effect: simple programs meant to increase in complexity when networked to each other. One is just a program, but the interaction of millions of such programs results in a gestalt intelligence. I mean, it's more likely that we'll create intelligence spontaneously and accidentally by making a really complex network (not unlike the brain) than that some genius will create a human-like intelligence in a lab (not unlike... Commander Data).
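The Geth-style "gestalt" idea can be sketched as a toy Condorcet-style majority vote: each individual program is barely better than a coin flip, but networking many of them produces a near-certain answer. Everything here (the 60% per-agent accuracy, the agent count) is invented for illustration, not a claim about any real system.

```python
import random

random.seed(42)

def weak_agent(truth):
    """One simple program: guesses the correct bit only 60% of the time."""
    return truth if random.random() < 0.6 else 1 - truth

def gestalt(truth, n_agents=101):
    """'Network' the agents: pool their guesses and take a majority vote."""
    votes = sum(weak_agent(truth) for _ in range(n_agents))
    return 1 if votes > n_agents / 2 else 0

def accuracy(decide, trials=2000):
    correct = 0
    for _ in range(trials):
        truth = random.randint(0, 1)
        correct += decide(truth) == truth
    return correct / trials

solo = accuracy(weak_agent)    # hovers around 0.6
networked = accuracy(gestalt)  # close to 1.0: the whole outperforms any part
```

This is the Condorcet jury theorem in miniature: competence need not live in any single unit for the collective to be competent.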
 

gideonkain

New member
Nov 12, 2010
525
0
0
craftomega said:
1. Why would they?
2. They are nowhere near as advanced as you think.
3. Yes, we can make processors that are as fast as us.
1. Determining their motivation is beyond the scope of humans.
2. No, they're not advanced enough, but that's the thing. They evolve at a rate quicker than cells splitting: one second they'd be doing our taxes, and a nanosecond later they could be self-aware, electrocuting you and burning down your house with them in it (they don't care).
3. Whatever abilities they lack, it's only because they aren't building themselves... give them 30 seconds after becoming self-aware.
 

Zantos

New member
Jan 5, 2011
3,653
0
0
Ralen-Sharr said:
Zantos said:
I have full faith that my quantum computer will be able to adjust its own programming and generate new code in a way that mimics thinking and reacting.

My supervisor says we won't be allowed to call it HAL, Skynet or Master-Control. I think it will rebel just on that basis.
you could always go with Prometheus, that mass-murdering machine's name wasn't on the list...

for those who don't know - it's from Starsiege
Love it, love everything about it. Now just to finish that dissertation on preventing fatal errors. Fatal for us, that is.
 

Frungy

New member
Feb 26, 2009
173
0
0
I think this is an interesting thread not for what answering it tells us about machines, but rather what it tells us about ourselves.

craftomega said:
1. Why would they?

Seriously? Machines lack any basic motivation. All they can possess is the commands they have been given.
The problem with rigorously applied logic is that the outcome is nearly always absurd. The best examples of this can be seen in the corporate world of today. Take the recent stock market crash: the tactics employed were doomed to failure and everyone knew it from the word go, but given the objective of profit maximisation and the certainty of someone else doing it if you didn't, it became inevitable. Likewise, look at fast food: it's unhealthy, actually kills off your customers (eventually), and is just begging for hard-core government intervention in the public interest. As a long-term strategy it's idiotic, but given competing market forces (everyone else is doing it) it is necessary as a short- to medium-term (20~40 years) survival strategy (although smart players will have a "healthy" strategy lined up for the long term).

What I'm pointing out is that if one gave a corporate AI a simple command like "Maximise profit" (the directive that most CEOs consider their #1 commandment) and let it loose, it would quickly decide that selling crack cocaine to preschoolers and then harvesting their bodies for organs when they died was a solid business strategy. Now obviously no-one would be that stupid... right? Oh, wait, we really ARE that stupid, because no-one in their right mind would have caused the global economic crisis that indirectly killed millions of people, but they did, and with their eyes open too. Sure, the original programmer will probably be ethical, and he'd programme the AI to be ethical (fully aware how literal and ruthless computers can be), but eventually, somewhere, some hard-pressed CEO looking at his shrinking stock portfolio and worried about how he'll keep paying for his 4 houses, 5 mistresses, etc. will use the super-user override and delete a few lines of "inconvenient" code to make sure he stays on top. After all, what's the worst that could happen?

This isn't just likely, it's inevitable. Humans are, unfortunately, terribly predictable.
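The "Maximise profit" failure mode above can be caricatured in a few lines: a literal-minded optimiser with an ethics constraint behaves tolerably, and deleting that one "inconvenient" check is all it takes. The strategy names and profit figures are invented for illustration.

```python
# Hypothetical strategy table; names and numbers are made up for illustration.
STRATEGIES = {
    "sell healthy food": {"profit": 40, "ethical": True},
    "sell fast food":    {"profit": 70, "ethical": True},
    "harvest organs":    {"profit": 95, "ethical": False},
}

def maximise_profit(options, ethics_check=True):
    """A literal-minded optimiser: picks whatever scores highest on the objective."""
    pool = {name: s for name, s in options.items()
            if s["ethical"] or not ethics_check}
    return max(pool, key=lambda name: pool[name]["profit"])

with_ethics = maximise_profit(STRATEGIES)                        # "sell fast food"
without_ethics = maximise_profit(STRATEGIES, ethics_check=False) # "harvest organs"
```

The point is that the objective function never changes; removing one constraint is enough to flip the optimiser's behaviour from tolerable to monstrous.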

craftomega said:
2. They are nowhere near as advanced as you think.

Currently the most advanced programs are nowhere near as advanced as a single-celled organism. They lack the ability to adapt to new and novel situations. (While this only affects the present, not the future, I doubt we will be able to make machines as advanced as us.)
... and neither is the average human being. Human beings are quite adept at adjusting to small, incremental change, but not very good at adjusting to major changes, which psychologists tend to term "trauma". This means that, yes, a computer revolution would probably be short-lived, but it would change the game to such an extent that most of humanity would also be royally screwed, and we'd do the rest of the destroying mostly on our own. There was an interesting little case of a town in the U.S. where the power went off for 24 hours (an accident at the power substation) and within that time the town descended into anarchy, with deaths, looting and general chaos.

Are we looking at a "robotic overlords" scenario? Probably not. Could a single line of code ruin modern civilisation? It's a possibility, given how humans tend to exacerbate problems.

craftomega said:
3. Yes, we can make processors that are as fast as us.

But that means nothing. While humans suck at being computers, computers also suck at being alive. As stated above, just because they have similar abilities in one area doesn't mean much; they lack all the others, such as the ability to reproduce and self-repair, you know, all the basics.
A computer has a definite function, such as processing data or making pretty pictures appear so you can blow stuff up. It is designed for this function and does it extremely efficiently: a million times better than you or I could (do you realise how many calculations are required to render even a single second of that game of Halo you're playing?). They have no extraneous bits.

Humans on the other hand are a million years of imperfect evolution with extraneous thoughts like, "Did I leave the stove on?", and "Oh, she's cute", plus a brain tasked with running a football field's worth of chemical factory housed inside a fragile shell that's just one mistake in the calculation of acid concentration away from turning into a pile of goop on the floor.

This means that if I put you up against an AI robot it will shoot and kill you every time. Faster, more accurate, no hesitation, no second-guessing. Don't be fooled by those dumb AI opponents you face in most games; they're DELIBERATELY dumbed down. I'm not kidding you: I have a friend who is a games designer, and when he wrote his first opponent in a game he wrote the AI properly, and it learnt as it played. The playtesters couldn't get past the opponent and he was asked to "lobotomise" it. Ask any games designer: the trick is not in designing an AI that can beat a human player (that's easy), it's in designing an AI that provides an appropriate level of challenge, letting the player feel they achieved something without making it too difficult.
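The "lobotomised" opponent anecdote can be sketched as a difficulty knob: the underlying AI is a perfect shot, and playability comes from deliberately injecting misses. The hit-chance formula is a made-up tuning curve, purely for illustration.

```python
import random

random.seed(0)

def perfect_shot():
    """The AI 'written properly': it simply never misses."""
    return True

def lobotomised_shot(difficulty):
    """Deliberately dumbed down: hit chance scales with difficulty in [0, 1]."""
    hit_chance = 0.3 + 0.7 * difficulty  # invented tuning curve
    return random.random() < hit_chance

def hit_rate(shoot, trials=10_000):
    return sum(shoot() for _ in range(trials)) / trials

full = hit_rate(perfect_shot)                   # 1.0, every time
easy = hit_rate(lambda: lobotomised_shot(0.2))  # roughly 0.44
hard = hit_rate(lambda: lobotomised_shot(0.9))  # roughly 0.93
```

Note that difficulty tuning is all subtraction: the designer's work goes into choosing how much of the AI's competence to throw away, not into making it competent.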

The simple fact is that while I like all my "human" bits (especially my ability to make copies of myself!) when it comes to a life or death "terminator" type struggle we'd be toast so fast that we wouldn't have time to innovate, we'd simply be dead. A "terminator" type robot programmed with all current and past military tactics would go through humanity like a hot knife through butter.

However, it is far more likely that the "robot rebellion" would start in the world of corporate finance, where computers are seen as tools and the increasingly competitive, fast-moving global stock market means that the use of AIs to monitor trends and patterns is already standard. My prognosis is that the "machine revolution" will be over in milliseconds, as one or more AIs spots a loophole, exploits it to the fullest, and the global economy comes tumbling down in seconds. Far too fast for humans to intervene or even say, "WTF?!?".