If the robot has free will, then by definition the killing is the robot's fault.
Conversely, if it's the creator's fault, that implies the robot doesn't in fact have free will.
The real questions are: what was the robot created to do in the first place, why does it have free will, and why did it murder someone?
If it was created specifically to kill people, then obviously the creator is at fault.
If it truly has free will and made the decision to kill someone independently of any pre-programmed instructions, it's the robot's fault.
If it has faulty programming (or, if it has learning algorithms, a bad education) that somehow caused it to go insane... that's more complicated, and the creator's responsibility becomes more akin to a parent's.