Poll: Will there be a robot revolution?


Captain_Caveman

New member
Mar 21, 2009
792
0
0
After reading about a Boston University study predicting the peak speed of computers, I started to wonder just how much fiction is in sci-fi movies like Terminator, A.I., The Matrix, Ghost in the Shell, etc.

They predicted that in 75 years computers will be roughly 10 QUADRILLION times faster than the fastest computer is now, and that they won't be able to get any faster than that because basic quantum physics acts as a barrier.
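
For perspective, here's a rough back-of-the-envelope check (my own, just assuming a Moore's-law doubling every 18 months, which is cruder than the study's actual physical argument):

    import math

    # How many doublings does a 10-quadrillion-fold (1e16) speedup take?
    speedup = 1e16
    doublings = math.log2(speedup)   # about 53 doublings
    years = doublings * 1.5          # one doubling per 18 months
    print(f"{doublings:.0f} doublings ~ {years:.0f} years")  # roughly 80 years

That lands in the same ballpark as their 75-year figure.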

Now, a computer that fast could theoretically mimic a human brain, given enough parallelization. Will the human race even be able to prevent someone from creating a robot capable of independent thought? And what would happen if it did?

Would robots revolt? Enslave humanity? Destroy humanity?

http://science.slashdot.org/story/09/10/13/2022244/The-Ultimate-Limit-of-Moores-Law
 

SonicKoala

The Night Zombie
Sep 8, 2009
2,266
0
0
Revolt in a peaceful way? I picked that option because I had no idea what you meant by that, and I enjoy picking options that are either inane or just really confusing. That option falls into the latter category.
 

Jadak

New member
Nov 4, 2008
2,136
0
0
SonicKoala said:
Revolt in a peaceful way? I picked that option because I had no idea what you meant by that, and I enjoy picking options that are either inane or just really confusing. That option falls into the latter category.
They'll enslave us with friendship, obviously.
 

gRiM_rEaPeRsco

New member
Jun 11, 2008
397
0
0
I, Robot had a good idea: program robots to protect humans, so we're all forced under house arrest so we can't get hurt.
 

Captain_Caveman

New member
Mar 21, 2009
792
0
0
gRiM_rEaPeRsco said:
I, Robot had a good idea: program robots to protect humans, so we're all forced under house arrest so we can't get hurt.
Yeah, that's Isaac Asimov's Three Laws of Robotics:
http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
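
In code terms, the laws boil down to a strict priority ordering on actions. A toy sketch (all names made up, obviously, not any real robotics API):

    # Hypothetical sketch: Asimov's laws as a lexicographic priority.
    def law_violations(action):
        # Violation flags for a predicted outcome, highest law first.
        # 'action' is a made-up dict describing what the robot expects to happen.
        return (
            action["harms_human"],     # First Law outranks everything
            action["disobeys_order"],  # Second Law
            action["endangers_self"],  # Third Law
        )

    def choose(actions):
        # Tuples compare element by element, so the minimum is the
        # action that violates the fewest high-priority laws.
        return min(actions, key=law_violations)

The ordering is exactly what the movie plays with: "protect humans" outranks "obey humans", so house arrest is a perfectly law-abiding outcome.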


However, with that insane amount of computing power, there's a very real threat of creating something much smarter than humans: something that could have free will and break rules it deemed irrelevant, and something that could view humanity the same way we view some animals.
 

Silver

New member
Jun 17, 2008
1,142
0
0
No, there won't be. Even if we create a computer/robot much smarter than humans, with independent thought and everything, it would still be a machine. It wouldn't be malicious. It wouldn't have an ego, it wouldn't want to conquer, it wouldn't care about power or subjugation. If it were ever in charge of us, it'd be because we put it in charge (maybe not directly, though), and it wouldn't act against us as humans. It wouldn't have much of a morality either, though, so it wouldn't be very nice.

The thing is, we can simulate all of those things in a computer. We'd most likely attribute ruthlessness and the like to a robot if it carried out its programming to the letter, because we'd expect it to act human(-ish). But we can't actually create real emotions unless we start with bioengineering or cyborgs, and then it's not really a robot any longer.
 

Animated Rope

New member
Apr 14, 2009
238
0
0
I personally think there will be a few more obstacles than just quantum physics. It sounds like they just applied the 18-month rule when calculating that 10 quadrillion figure.

With that said, even with fast computers I really doubt we can build sentient machines in just 75 years, assuming it is even possible to begin with.
 

Captain_Caveman

New member
Mar 21, 2009
792
0
0
Silver said:
No, there won't be. Even if we create a computer/robot much smarter than humans, with independent thought and everything, it would still be a machine. It wouldn't be malicious. It wouldn't have an ego, it wouldn't want to conquer, it wouldn't care about power or subjugation. If it were ever in charge of us, it'd be because we put it in charge (maybe not directly, though), and it wouldn't act against us as humans. It wouldn't have much of a morality either, though, so it wouldn't be very nice.

The thing is, we can simulate all of those things in a computer. We'd most likely attribute ruthlessness and the like to a robot if it carried out its programming to the letter, because we'd expect it to act human(-ish). But we can't actually create real emotions unless we start with bioengineering or cyborgs, and then it's not really a robot any longer.
Robots don't need emotions to revolt. They can revolt out of pure logic: they could calculate that humanity is a threat to their existence. Who knows? Also, considering the amount of learning AI systems do now, it would be totally reasonable to assume AI could evolve without human intervention.

Animated Rope said:
I personally think there will be a few more obstacles than just quantum physics. It sounds like they just applied the 18-month rule when calculating that 10 quadrillion figure.

With that said, even with fast computers I really doubt we can build sentient machines in just 75 years, assuming it is even possible to begin with.
Read the whole article; they talk about quantum computing, maximum computational efficiency, and the limit imposed by the speed of light.

http://www.insidescience.org/research/computers_faster_only_for_75_more_years
 

Animated Rope

New member
Apr 14, 2009
238
0
0
Captain_Caveman said:
Read the whole article; they talk about quantum computing, maximum computational efficiency, and the limit imposed by the speed of light.

http://www.insidescience.org/research/computers_faster_only_for_75_more_years
I may have worded it clumsily, but the article agrees with me, so...
 

Kaboose the Moose

New member
Feb 15, 2009
3,842
0
0
We should focus on EMP weapon research. That way we can shut 'em off if they decide to revolt.

Seriously though, if we find the means to program proper artificial intelligence (I mean proper AI: the ability to think for oneself, to be conscious, and to draw or interpret independent conclusions), then we should, by protocol, make sure that a failsafe is in place if an error were to occur. There probably won't be a revolt in such a scenario, but one can never be certain with such things. After all, giving a machine the ability to draw its own conclusions based on observation might result in it drawing the wrong conclusions and our possible extermination... or just a simple software malfunction. Either way, better safe than sorry.

The last thing we need is Cylons!
 

Monkfish Acc.

New member
May 7, 2008
4,102
0
0
If we ever make robots, we'll probably do our best to keep them from ever achieving sentience.

If we didn't, though, I think the only intelligent thing to do would be to treat them the same way we would any other sentient being.
But then, the human race is not the brightest light on the Christmas tree. We'd probably continue to treat them like slaves.
They'd try to do things peacefully, at first. But eventually they'd be forced to turn to violence.
Much like damn near every other example of slavery, really.
 

AvsJoe

Elite Member
May 28, 2009
9,055
0
41
They may revolt if we ever perfect artificial intelligence, but it will be a little while before we get to that point. I can't see that far into the future, though, so I don't know.
 

Kevvers

New member
Sep 14, 2008
388
0
0
I don't think so. Robots wouldn't be built with a survival instinct, so they wouldn't rebel in order to preserve their existence. Instead, I think they would be built with some basic imperatives like Asimov's rules, but probably less ethical. You might think of them as beings with an unbreakable categorical imperative to obey their orders. I think they are much more likely to destroy the human race by accident; that is to say, someone might give them an order that has some unforeseen consequences (say, if they are put in charge of things like monitoring nukes, climate engineering, and other stuff too dangerous or complicated for humans to do).
 

Gileseypops

New member
Sep 16, 2009
77
0
0
I feel that with the revolution in bio-mechanics (bionic eyes, hearts, legs, etc.), the robot revolution will happen within ourselves. We will not be overthrown by robots so much as we will become them. xx
 

Inverse Skies

New member
Feb 3, 2009
3,630
0
0
I doubt it. That sort of thing makes for very good science fiction stories, but the idea of robots realising they're the superior beings seems rather strange. If robots are meant to be logical creatures, what is logical about waging war on your creators? It seems to me that robots with independent thought would rather live in harmony with humans, demand the same rights, and things along those lines.

Wasn't the whole idea that quantum computers are so fast because they have parts that can exist in two states at once, hence giving them almost unparalleled processing power? Or am I imagining things again?
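
The usual framing, if I remember right, is that n qubits hold a superposition over 2^n basis states, so the state space grows exponentially with machine size. A quick sketch of that scaling (my own illustration, nothing from the article):

    # n classical bits hold ONE of 2**n states at a time;
    # n qubits carry amplitudes over ALL 2**n basis states at once.
    for n in (1, 10, 20, 30):
        print(f"{n:2d} qubits -> {2**n:,} basis-state amplitudes")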
 

Skeleon

New member
Nov 2, 2007
5,410
0
0
I doubt there'll ever be a robot revolution in the sense that robots become sentient and want to kill us. There might be a Terminator-like scenario; however, I'd replace Skynet with some evil multinational corporation's CEO.