Hypothesis: Blinky doesn't actually breakdown until the commands to kill are given. Is it possible that he is programmed according to Asimov's laws, and that was the contradiction that broke him? The child repeatedly ordering him to kill, when that is not in his function, is the true cause of his breakdown, just another set of contradictions which he cannot fix?
Further hypothesis: What if he actually was following Asimov's laws in this course of action? Focusing on one particular clause, "Or, through inaction, allow harm to come to a human", is it not feasible that this learning, thinking robot observed the pain inherent to the home it was in, and decided that humanity in general would only be safe from harming themselves and each other if they were dead? As someone before stated, no matter what course of action he takes, they are going to die. However, inaction leads to a slow, inevitable death of pain, loss, agony, and bitterness, perhaps leading to the direct suffering of others. Instantaneous death, on the other hand, prevents further harm to them. obviously it's circular logic, but in the broken mind of a malfunctioning robot, it would appear an easy deduction, eh?