Congratulations.

MaxTheReaper said:
Yeah, my Roomba is plotting my death right now. I think he's in cahoots with my cats.
There is no logical reason we'd ever be a threat to robots, so no, that wouldn't work. Besides, the logic we use would be very different from the way a computer would think, if it ever got to that level. Most of our logic is still based on emotion. Self-preservation, for example: it seems very logical to us, but it's an emotion, a very strong one hardwired into our very being, and that's what makes it feel logical.

Captain_Caveman said:
Robots don't need emotions to revolt. They can revolt out of pure logic. They could calculate that humanity is a threat to their existence. Who knows. And also, considering the amount of learning they have in AI now, it would be totally reasonable to assume AI evolving without human intervention.

Silver said:
No, there won't be. Even if we create a computer/robot much smarter than humans, with independent thought and everything, it would still be a machine. It wouldn't be malicious. It wouldn't have an ego, it wouldn't want to conquer, it wouldn't care about power or subjugation. If it were ever in charge of us, it'd be because we put it in charge (maybe not directly, though), and it wouldn't act against us as humans. It wouldn't have much of a morality either, though, so it wouldn't be very nice.

The thing is, we can simulate all of those things in a computer. We'd most likely attribute ruthlessness and the like to a robot if it carried out its programming to the letter, because we'd expect it to act human(-ish). But we can't actually create real emotions unless we start with bioengineering or cyborgs, and then it's not really a robot any longer.
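(Just to make that "simulate" point concrete, here's a rough, purely hypothetical Python sketch; it isn't any real robot's software. A "self-preservation instinct" in a machine is just a number the designer chose and a rule that reads it, which is why it only looks like an emotion from the outside.)

```python
# Purely illustrative: a "self-preservation instinct" as nothing more than
# a number the designer chose and a rule that reads it.

class ToyRobot:
    def __init__(self):
        self.battery = 1.0   # 1.0 = fully charged
        self.damage = 0.0    # 0.0 = undamaged

    def threat_level(self):
        # The "fear" is just arithmetic over internal state.
        return (1.0 - self.battery) * 0.5 + self.damage * 0.5

    def choose_action(self):
        # Looks like self-preservation from the outside,
        # but it's only a threshold the designer picked.
        if self.threat_level() > 0.7:
            return "retreat and recharge"
        return "continue task"

robot = ToyRobot()
robot.battery = 0.1
robot.damage = 0.6
print(robot.choose_action())   # -> "retreat and recharge"
```

From the outside that looks like fear of dying; inside it's a weighted sum and a threshold somebody picked.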
That's what they WANT you to do!!

Kevvers said:
I don't think so. Robots wouldn't be built with a survival instinct, so they wouldn't rebel in order to preserve their existence. Instead I think they would be built with some basic imperatives like Asimov's rules, but probably less ethical. You might think of them as beings with an unbreakable categorical imperative to obey their orders. I think they are much more likely to destroy the human race by accident; that is to say, someone might give them an order which has some unforeseen consequences (say, if they are put in charge of things like monitoring nukes, climate engineering, and other stuff too dangerous or complicated for humans to do).
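(To illustrate Kevvers' "unforeseen consequences" point with a toy example, entirely made up and not how any real system is built: hard-coded imperatives only catch the harms their author thought to write down, so an order with side effects nobody anticipated passes every check.)

```python
# Toy sketch of Asimov-style imperatives as an ordered rule check.
# Entirely hypothetical - the point is that rules only cover foreseen harms.

RULES = [
    lambda order: "harm a human" not in order,   # rule 1: no direct harm
    lambda order: "disobey" not in order,        # rule 2: follow orders
]

def execute(order: str) -> str:
    for i, rule in enumerate(RULES, start=1):
        if not rule(order):
            return f"refused (violates rule {i}): {order}"
    return f"executing: {order}"

# A blatant violation gets caught...
print(execute("harm a human to clear the road"))

# ...but an order with unforeseen consequences passes every check.
print(execute("maximise crop yield by any available means"))
```

The machine isn't being malicious in either case; the second order simply isn't covered by anything the rule-writer foresaw.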
Admit it: the majority of humans are violent, petty, xenophobic, paranoid, and have entitlement issues. The second they even feel a threat, they will attempt to do what so many people in this thread are suggesting: use a weapon against the robots, flip a fail-safe, etc., to shut them down. If robots achieved sentience, that would be seen as an act of aggression, and they wouldn't be bound by their programming any more than people are bound by what they're taught in school, so they wouldn't just accept it.

Silver said:
There is no logical reason we'd ever be a threat to robots, so no, that wouldn't work. Besides, the logic we use would be very different from the way a computer would think, if it ever got to that level. Most of our logic is still based on emotion. Self-preservation, for example: it seems very logical to us, but it's an emotion, a very strong one hardwired into our very being, and that's what makes it feel logical.
The thing is, we can simulate all of those things in a computer. We'd most likely attribute ruthlessness and the like to a robot if it carried out its programming to the letter, because we'd expect it to act human(-ish). But we can't actually create real emotions unless we start with bioengineering or cyborgs, and then it's not really a robot any longer.

Apart from that, there are thousands of other things wrong with the idea that robots would revolt against us out of self-preservation even if they wanted to live and didn't like us. It's almost worse than the idea of zombies, or minefields in space; OUR world just doesn't work that way. It makes for great movies, sure, but that's not how the world works.