Asimov's three laws were dumb.

wordsmith said:
Adam Jenson said:
i. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
ii. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
iii. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

I can immediately see one flaw in this: "A robot may not injure a human being or, through inaction, allow a human being to come to harm." So you walk into a bar, order a beer and a packet of pork scratchings (fried pork fat: the perfect wing-man for the good old British pint).
Mr Roboto comes up to you and says, "I'm sorry, I can't let you do that. That beer contains alcohol, and that pork fat contains fat and salt. For your own protection I must prevent you from consuming these."
Basically, what is "allowing a human to come to harm"? If they're about to be hit by a car or mugged, fair play. If they're "doing damage" to themselves by doing everyday chores, that's not so great.
I wouldn't give robots freedom for the same reason I wouldn't give a security guard the keys and security code to my house/safe etc. Yes, it's great whilst he's on your side, but if you are doing something that he doesn't agree with, you've now got to argue with a guy who's taller, more muscular, and trained to incapacitate people.
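To make wordsmith's point a bit more concrete, here is a minimal sketch of a robot applying a naively literal reading of the First Law. The rule and the "risk" labels are entirely made up for illustration; only the pub order comes from the post above.

```python
# Hypothetical sketch: a naive First Law check that treats any
# conceivable long-term harm the same as immediate danger.
# The items and risk labels are invented for this example.

ORDER = [
    {"item": "pint of bitter",   "long_term_risk": "alcohol"},
    {"item": "pork scratchings", "long_term_risk": "fat and salt"},
    {"item": "glass of water",   "long_term_risk": None},
]

def naive_first_law(order):
    """Refuse anything that could conceivably harm the human, ever."""
    for entry in order:
        if entry["long_term_risk"]:
            print(f"I'm sorry, I can't let you have the {entry['item']}: "
                  f"it contains {entry['long_term_risk']}.")
        else:
            print(f"Here is your {entry['item']}.")

naive_first_law(ORDER)
# Everything except the water gets vetoed, because "harm" was never
# scoped to immediate, serious danger -- exactly the flaw described above.
```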
I have not.

Alex_P said:
wordsmith said:
Mr Roboto comes up to you and says "I'm sorry, I can't let you do that. That beer contains alcohol, and that pork fat contains fat and salt. For your own protection I must prevent you from consuming these."

That's kinda what Daneel, Asimov's #1 robot, actually ends up doing on a grand scale. With cultural manipulation and puppet governments and shit.
-- Alex
So, here's the quick version:

Nivag said:
I have not.
And they aren't just illusions in us? I know you can argue against that; I just said it to be obnoxious. But it is likely that one day actual intelligent AI will exist, just not for quite some time.

Nivag said:
Aww come on people, Compliance only. They're robots. They will NEVER genuinely think for themselves, and whatever way you look at it, unless we get to the point where we install actual brains into robots, they don't have emotions or feelings. Just the illusion that they do. They are just coding. They are not living things.
Though, to be fair, computers would learn differently from humans. Attempting to recreate neural pathways in a computer is just plain stupid. It would be much easier to make a different sort of intelligence than a direct copy of a person.

Alex_P said:
So, here's the quick version:
The fundamental idea of machine learning is that, instead of programming instructions for doing something into a machine, you can program it with how to learn to do something. It's kinda like you're making a machine that constructs its own little mental model of something and then modifies it over time. Right now these systems are very domain-specific -- a program that learns how to play backgammon, a program that learns how to identify parts of speech in a sentence, a program that learns how to read messy handwriting on postal envelopes, a program that learns to identify tanks in satellite photos.
You can make a computer program that totally kicks ass at a game that you barely understand.
-- Alex
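For anyone who wants to see that idea in code, here is a minimal sketch of the "program the learning, not the answer" point. The toy task, numbers, and learning rule (a simple perceptron update) are my own illustration, not anything from Alex_P's post.

```python
# Toy illustration: we never hard-code the rule, we code a learning
# procedure and let the program find the rule from examples.
import random

# Labelled examples: (feature vector, correct answer).
# The hidden "rule" is simply: is x + y > 1 ?
def make_example():
    x, y = random.random(), random.random()
    return (x, y), 1 if x + y > 1 else 0

weights = [0.0, 0.0]
bias = 0.0

def predict(features):
    total = bias + sum(w * f for w, f in zip(weights, features))
    return 1 if total > 0 else 0

# The learning loop: nudge the weights whenever a prediction is wrong.
for _ in range(10000):
    features, answer = make_example()
    error = answer - predict(features)   # -1, 0, or +1
    for i, f in enumerate(features):
        weights[i] += 0.1 * error * f
    bias += 0.1 * error

# The program was never told the rule, but it has learned a usable
# approximation of it from the examples alone.
print(predict((0.9, 0.8)), predict((0.1, 0.2)))   # expect 1 and 0
```

The backgammon and handwriting examples have the same shape, just scaled up: a model, a way to score its guesses, and a rule for adjusting it from data.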
Those of us who did are obviously pleased with a merely compliant robot.

Altorin said:
Did you guys even read I, Robot?
The three laws don't work; they're fundamentally flawed.
If a robot follows the three laws, it will inevitably take over. It'll consider NOT taking over and babysitting humanity to be in direct conflict with the First Law, and as that's the most important law, nothing any human can do can stop it.
That's the WHOLE point of I, Robot.
You're a cranky old technophobe.

GDW said:
Those of us who did are obviously pleased with a merely compliant robot.

Altorin said:
Did you guys even read I, Robot?
The three laws don't work; they're fundamentally flawed.
If a robot follows the three laws, it will inevitably take over. It'll consider NOT taking over and babysitting humanity to be in direct conflict with the First Law, and as that's the most important law, nothing any human can do can stop it.
That's the WHOLE point of I, Robot.
...like the ones that put cars together...
In the end, if you take humanity out of a care-giver FOR ANY FUCKING REASON, then you've already taken humanity's best interest out of mind. That's one of the thoughts behind the "uncanny valley" principle, wherein a human will start to realize exactly how un-human something is the more human it tries to be. A robot that looks human is fine so long as it isn't given a personality, and DEFINITELY not given free will or any sort of true decision-making abilities.
Call me the cranky old technophobe, here, but I believe robots will inevitably be the biggest flaw humanity will have to deal with.
Well, I'm not worried about it in the sense that if the world's going to end, it's going to end whether I worry or not, so worry not.

GDW said:
Well, I did say flaw, now. I'm not too cranked up over the asteroid; I worry about it about as much as I worry about the 2012 theory. Nor could I care less about robots bothering future generations. I just don't like the thought that our generation may be paving the way to a disaster that could be nipped in the bud by simply NOT being jackasses.