Assuming that Isaac Asimov's Three Laws of Robotics are the best way to go when designing robots or AI in the first place.
The Three Laws of Robotics aren't some kind of universal truth; they're rules that a science fiction author made up. Should we also complain that we haven't yet managed to break the speed-of-light barrier? Because there's a whole host of SF that tells us we should be able to do that by now.
Besides, I think the Three Laws are a bit odd on their own terms. The First Law says a robot may not harm a human, whether by action or through inaction. But what about when harming one human would save ten? In that instance I'd really hope the robot would harm the one human, yet the law as written forbids both choices, as the sketch below shows.
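To make that conflict concrete, here's a minimal sketch (the function and the numbers are made up purely for illustration, not any real robotics API) of why a strictly literal First Law leaves a robot with no permissible option in a one-versus-ten dilemma:

```python
def violates_first_law(harmed_by_action, harmed_by_inaction):
    """First Law (paraphrased): a robot may not injure a human being
    or, through inaction, allow a human being to come to harm."""
    return harmed_by_action > 0 or harmed_by_inaction > 0

# Hypothetical dilemma: act and harm one person,
# or do nothing and let ten people come to harm.
options = {
    "act":        {"harmed_by_action": 1, "harmed_by_inaction": 0},
    "do_nothing": {"harmed_by_action": 0, "harmed_by_inaction": 10},
}

for name, outcome in options.items():
    print(name, "violates the First Law:", violates_first_law(**outcome))

# Both options violate the law, so a robot bound to it literally
# has no permissible choice: the rule gives no guidance at all here.
```

Every branch comes back as a violation, which is exactly the point: a rule stated as an absolute prohibition can't trade one harm off against another.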
Also, we don't even have artificial intelligence yet, so let's not get ahead of ourselves.