I don't have too much to contribute to this topic (and I apologize if someone has already mentioned my suggestions), and since this is a highly philosophical and hypothetical topic, I don't want to dig too deep into it here. I actually wrote a bit of a research paper on it last semester, so I've given it extensive thought already, and I can vouch that it's kind of painful. My best advice is to read Philip K. Dick's Do Androids Dream of Electric Sheep? and Isaac Asimov's I, Robot and come to your own conclusions about where the line of humanity is drawn.
My personal opinion is that if robots/androids advance to the point where they can essentially mimic human thought, behavior, and action, then that's all that really matters, and they may as well be human. Consider the Turing Test (wiki it for a better explanation): a human judge interacts with two subjects, one human and one a computer, through text-only conversation. If the judge is unable to reliably tell the two apart, the computer is said to have passed the test.
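Just to make the setup concrete, here's a toy sketch of that judging procedure. Everything in it is hypothetical and purely illustrative (the canned replies, the single-question "conversation", the judge strategy); the point is only that if the two subjects' outputs are indistinguishable, the judge can do no better than chance.

```python
import random

def human_reply(prompt: str) -> str:
    # Stand-in for the human subject's side of the conversation.
    return "I'd have to think about that one."

def machine_reply(prompt: str) -> str:
    # A machine that perfectly mimics the human's answers.
    return "I'd have to think about that one."

def run_test(judge, rounds=100):
    """The judge sees two anonymous text replies per round and must
    guess which one came from the machine. Returns the fraction of
    correct guesses; near 0.5 means the machine passes."""
    subjects = [human_reply, machine_reply]
    random.shuffle(subjects)  # judge never knows which seat is which
    correct = 0
    for _ in range(rounds):
        replies = [s("What do you dream about?") for s in subjects]
        guess = judge(replies)  # index (0 or 1) of the suspected machine
        if subjects[guess] is machine_reply:
            correct += 1
    return correct / rounds

# With identical transcripts, any judge is reduced to guessing.
rate = run_test(lambda replies: random.randrange(2))
```

Obviously a real test involves free-form back-and-forth dialogue, but the pass condition is the same: the judge's accuracy collapses to a coin flip.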
Consider, also, our friend Bill, who suddenly has a neuron in his brain go bad. We replace that neuron with a microchip. He is still human to us, just as a man with a pacemaker or glasses is still human to us. A week later, another neuron breaks down and is replaced with another microchip. This process repeats indefinitely: replacing neurons one by one, he becomes progressively more android until he is eventually completely artificial. If all of this occurs with no ill effect on his performance or behavior (i.e., he continues to act the same, has the same general mood, and maintains the same relationships with others), then from all outward appearances, as far as it would matter to us, he remains the same old Bill, and therefore the same old human.
What it boils down to is not whether robots actually are intelligent, or actually have human emotions, or anything like that. What matters is that they can give the appearance of having those qualities by acting and reacting as if they do. And it's really not unreasonable to expect a computer to eventually be able to emulate these qualities. So I could easily see a case being made in their favor for "human rights" somewhere down the road, unless humanity makes a firm effort in their design and programming to keep them down.