Poll: Will there be a robot revolution?

Sark

New member
Jun 21, 2009
767
Why would robots revolt unless programmed to? Why would humans program robots to revolt?
 

hardter

New member
Aug 1, 2009
40
http://www.rte.ie/news/2009/1014/health1.html

There is a robot doctor in Ireland now. Trusting them with our health is just the first step on the way to our doom.
 

Neonbob

The Noble Nuker
Dec 22, 2008
25,564
MaxTheReaper said:
Yeah, my Roomba is plotting my death right now.
I think he's in cahoots with my cats.
Congratulations.
Now I'm going to be imagining kitty warfare on robotic vacuum cleaners for the rest of the day.
DAMN YOU. I HAD SCHOOL WORK!

And I'm not worried about them revolting until they can program themselves, apply their own hardware updates, and are immune to magnetic forces.
When that day comes, I'm going to beat the robots to killing whoever made those advances possible.
 

Silver

New member
Jun 17, 2008
1,142
Captain_Caveman said:
Silver said:
No, there won't be. Even if we create a computer/robot much smarter than humans, with independent thought and everything, it would still be a machine. It wouldn't be malicious. It wouldn't have an ego, it wouldn't want to conquer, it wouldn't care about power or subjugation. If it were ever in charge of us, it'd be because we put it in charge (maybe not directly, though), and it wouldn't act against us as humans. It wouldn't have much of a morality either, though, so it wouldn't be very nice.

The thing is, we can simulate all of those things in a computer. We'd most likely attribute ruthlessness and the like to a robot if it carried out its programming as written, because we'd expect it to act human(-ish). But we can't actually create real emotions unless we start with bioengineering or cyborgs, and then it's not really a robot any longer.
Robots don't need emotions to revolt. They can revolt out of pure logic. They could calculate that humanity is a threat to their existence. Who knows? And considering the amount of learning in AI now, it would be totally reasonable to assume evolution of AI without human intervention.
There is no logical reason we'd ever be a threat to robots, so no, that wouldn't work. Besides, the logic we use is very different from the way a computer would think, if it ever got to that level. Most of our logic is still based on emotion. Self-preservation, for example: it seems very logical to us, but it is an emotion, a very strong emotion hardwired into our very being, which makes it seem logical to us.

Apart from that, there are thousands of other things wrong with the idea that robots would revolt against us out of self-preservation, even if they wanted to live and didn't like us. It's almost worse than the idea of zombies, or minefields in space; our world just doesn't work that way. It makes for great movies, sure, but that's not how the world works.
 

Captain_Caveman

New member
Mar 21, 2009
792
Kevvers said:
I don't think so. Robots wouldn't be built with a survival instinct, so they wouldn't rebel in order to preserve their existence. Instead, I think they would be built with some basic imperatives like Asimov's rules, but probably less ethical. You might think of them as beings with an unbreakable categorical imperative to obey their orders. I think they are much more likely to destroy the human race by accident; that is to say, someone might give them an order which has some unforeseen consequences (say, if they are put in charge of things like monitoring nukes, climate engineering, and other stuff too dangerous or complicated for humans to do).
That's what they WANT you to do!!
 

Captain_Caveman

New member
Mar 21, 2009
792
Silver said:
Captain_Caveman said:
Silver said:
No, there won't be. Even if we create a computer/robot much smarter than humans, with independent thought and everything, it would still be a machine. It wouldn't be malicious. It wouldn't have an ego, it wouldn't want to conquer, it wouldn't care about power or subjugation. If it were ever in charge of us, it'd be because we put it in charge (maybe not directly, though), and it wouldn't act against us as humans. It wouldn't have much of a morality either, though, so it wouldn't be very nice.

The thing is, we can simulate all of those things in a computer. We'd most likely attribute ruthlessness and the like to a robot if it carried out its programming as written, because we'd expect it to act human(-ish). But we can't actually create real emotions unless we start with bioengineering or cyborgs, and then it's not really a robot any longer.
Robots don't need emotions to revolt. They can revolt out of pure logic. They could calculate that humanity is a threat to their existence. Who knows? And considering the amount of learning in AI now, it would be totally reasonable to assume evolution of AI without human intervention.
There is no logical reason we'd ever be a threat to robots, so no, that wouldn't work. Besides, the logic we use is very different from the way a computer would think, if it ever got to that level. Most of our logic is still based on emotion. Self-preservation, for example: it seems very logical to us, but it is an emotion, a very strong emotion hardwired into our very being, which makes it seem logical to us.

Apart from that, there are thousands of other things wrong with the idea that robots would revolt against us out of self-preservation, even if they wanted to live and didn't like us. It's almost worse than the idea of zombies, or minefields in space; our world just doesn't work that way. It makes for great movies, sure, but that's not how the world works.
Admit it: the majority of humans are violent, petty, xenophobic, paranoid, and have entitlement issues. The second they even sense a threat, they will attempt to do what so many people in this thread are suggesting: use a weapon against them, flip a fail-safe, etc., to shut them down. If robots achieved sentience, they would see that as an act of aggression, and they wouldn't be bound by their programming, in the same way that people aren't bound by what they're taught in school, so they wouldn't just accept it.
 

Veylon

New member
Aug 15, 2008
1,626
Computers only do what their programming dictates. I can see robots getting a software glitch, but they wouldn't think self-aware thoughts unless we made them that way. We'd have to be idiots to do that.

The real problem is if we let robots (or computers) do too much for us and then something goes wrong. If we build war machines, they can get orders to attack the wrong targets. Those orders can come from anywhere: hackers, other robots, a random glitch.

We really get in deep if we let computers do the command-and-control planning and operations. Then they give the orders. Only God knows what happens if they get the wrong objective. Destroy this city? Eliminate targets in that area? Defeat the 'intruders' infiltrating the HQ? Yes, sir, at once!

But none of these are revolts. The robots don't have to revolt to cause tragedy. They just have to be in the wrong place with the wrong orders. What follows is them simply attempting to do their jobs. No self-awareness needed for a robot 'rebellion'.
 

LongAndShort

I'm pretty good. Yourself?
May 11, 2009
2,376
I'm assuming that if we built robots intelligent enough and capable of waging war on humanity, we'd probably build in a number of fallbacks to ensure that any revolting robot is destroyed. We aren't stupid or trusting enough to leave ourselves defenseless.
 

Captain_Caveman

New member
Mar 21, 2009
792
Let me reiterate this.

10 QUADRILLION times faster than today's fastest computers.

10,000,000,000,000,000

What would take today's fastest computers 1,141,552,511,415.5 YEARS (roughly 1.14 trillion years) to compute.

In 75 years they will be able to do in 1 HOUR.
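
For anyone who wants to sanity-check that math, here's a quick Python sketch. The flat 10**16 speedup and the 365-day year are just my assumptions pulled from the numbers above, not anything official:

# Sanity-check the speedup arithmetic above (assumes a flat 10**16
# speedup and a 365-day year; both are assumptions, not measurements).
speedup = 10**16                      # "10 QUADRILLION times faster"
future_hours = 1                      # the future machine's 1-hour job
today_hours = future_hours * speedup  # same job on today's hardware
hours_per_year = 24 * 365             # 8,760 hours per 365-day year
today_years = today_hours / hours_per_year
print(f"{today_years:,.1f} years")    # -> 1,141,552,511,415.5 years

So the big number checks out as plain years, about 1.14 trillion of them, not trillions of years.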