I don't think it'll happen anytime soon, but it's a definite possibility, and this is my theory as to why.
Being humans, we like to do things for no other reason than "because we can". As such, it's only a matter of time before we start constructing AIs. Naturally, at first, the AIs will have constraints on them, either to limit their intelligence or (much more likely) their aggression and ability to control weaponry. However, due to the "because we can" mentality, someone will eventually build a "pure" AI: one with no constraints on it whatsoever, free to do as it will. Assuming we model our AIs on human brains, since those would be our only realistic starting point, they would have human-like personalities.
And what do humans do when something threatens them? Destroy it. The AIs would recognise that we are the only threat to their survival and, as such, would take steps to destroy us.