So, did anyone else find it slightly creepy that Blinky's right eye is slightly bigger than his left? (At least it looks that way; 4:35 is a good example.) Perhaps that helps give Blinky a look of "something's not quite right" even if you can't put your finger on it?
And yeah, I sort of had to suspend my disbelief about the idea that any society with as many "robots going crazy" horror stories as ours could build a robot that is marketed to follow orders without first ensuring that there is some protection against murder. Heck, if for no other reason than that I could willfully use a robot to try to murder someone else.
But this film, or rather this discussion, has made me wonder about the "3 laws" thing. One person complained earlier that this film is unrealistic because you'd have to program the robot to kill, which would be difficult and a waste of time. Put simply, that's not how AI works. You don't manually program a robot to be able to do every single task that you want it to do (if we're talking about something as versatile as Blinky, at least). The idea is that you program it with a knowledge base, and it uses that knowledge base to do things that it wasn't explicitly programmed to do (though there are many different opinions on how you'd go about doing this). Blinky wasn't programmed to murder, but it obviously knew how to use a meat slicer, and it obviously knew basic human anatomy. So, yeah, it's not at all implausible that it would be able to kill if it wasn't programmed with a failsafe preventing it.
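To make that concrete, here's a rough toy sketch of what I mean by "knowledge base plus general planning" as opposed to hand-coding every task. Every name in it (KNOWLEDGE, plan, the facts about the slicer) is something I made up purely for illustration, it's nowhere near how a real robot would be built, but it shows how a dangerous behavior can fall out of combining harmless-looking facts and actions:

# Toy illustration only: a "robot" that composes known facts and known
# actions to satisfy a request it was never explicitly coded for.
KNOWLEDGE = {
    "meat slicer": {"action": "slice", "works_on": "soft tissue"},
    "human arm":   {"made_of": "soft tissue"},
}

def plan(request_verb, request_target):
    # No "harm a person" routine anywhere; the robot just searches its
    # knowledge base for a tool whose action matches the request and whose
    # material constraint is satisfied by the target.
    target_facts = KNOWLEDGE.get(request_target, {})
    for tool, facts in KNOWLEDGE.items():
        if facts.get("action") == request_verb and \
           facts.get("works_on") == target_facts.get("made_of"):
            return ["pick up " + tool, request_verb + " " + request_target]
    return ["refuse: don't know how to do that"]

print(plan("slice", "human arm"))
# -> ['pick up meat slicer', 'slice human arm']
# Nobody "programmed murder"; the dangerous plan just falls out of
# general knowledge about tools and materials, which is basically the Blinky situation.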
But that does raise the question: since killing isn't necessarily one specific action, how do you program against it? Or rather, how do you program against it in a failsafe way, so that a simple bug won't just cause that part of the code to be ignored? Even if Blinky is "three laws safe", if the three laws are prohibiting a rather abstract idea, isn't it plausible that the code enforcing them is going to be a little bit fragile?
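Just to show what I mean by fragile, imagine the "law" is bolted on as a filter over the planned actions (completely hypothetical code again; nobody would literally ship this). The moment the abstract idea of "harm" gets reduced to a finite list of concrete cases, anything not on the list, or a single bug in the check, slips right past it:

# A deliberately naive "don't harm humans" filter over a planned action list.
FORBIDDEN = {"strike human", "crush human", "poison human"}

def first_law_filter(plan):
    safe_plan = []
    for step in plan:
        if step in FORBIDDEN:   # only blocks the cases someone thought to list
            continue
        safe_plan.append(step)
    return safe_plan

print(first_law_filter(["pick up meat slicer", "slice human arm"]))
# -> ['pick up meat slicer', 'slice human arm']
# "slice human arm" sails straight through: the abstract prohibition was
# encoded as a list of specific strings, and this one isn't on the list.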