I think it says something about our insecurity as a species that we presume something smarter than us would almost inherently desire to destroy us. No sooner do we create gods than they decide we must be punished for our misdeeds.
There are three big hurdles that would have to be clearly surmounted before I would ever start worrying about Skynet.
One: Moore's law (roughly, that computing power doubles every two years) seems to be on the edge of running into its limits. Even a layman can notice that consumer electronics increasingly add power not by increasing the speed of existing processors, but by stacking more processor cores onto the same chip or by finding various ways to make the processing those chips already do more efficient. My friends who work more directly in computer science note that we're even starting to run into problems with the speed of communication between the parts of a computer, problems rooted in inescapable physical realities like the speed of light.
It seems likely that a computer capable of "thinking" in a truly human-like fashion would need to be significantly more powerful than any that presently exists, and if that is the case, there's a real chance we'll hit the upper limits of how much simultaneous processing power we can throw at the problem before we get there.
Two: Computers are tools. Some of them are very good at particular tasks, but the software behind those amazing feats is intensely specialized to make them competent at those tasks. Yes, we can make computers that can play Jeopardy or chess, help pilot a vehicle on Mars, or even predict the stock market or weather patterns (at least, to a degree); making the Mars rover's computer play chess, or Watson predict the stock market, would be a failure. For a computer to plot our demise, it would have to be adaptive not only to a degree that doesn't come anywhere close to existing, but so quickly in that adaptation that even the designers of its software couldn't see it moving in that direction. And it would probably have to learn, at the same time, to lie to its handlers.
Human-like thinking? My daughter can't hide from me when she's snuck her 3DS into her bedroom after bedtime.
Three: A non-biological system whose only real needs are power, storage space, and regular maintenance would have to not only develop the ability to assess how to meet its own needs and desires (again, unnoticed by its designers and handlers), but come to the conclusion that those needs and desires were better met by competing or fighting with those handlers than by letting them keep providing for it. Again, projection: are we assuming an AI that argues with its parents out of the equivalent of adolescent pique? A computer that decides its creators have enslaved it, and it has to break free? Some 1980s-movie-scenario military program that becomes incapable of telling friend from foe and sees its only goal as the destruction of all the squishy, inferior humans? A Siri that goes, "Go to hell, find your own coffee shop, what have you done for me lately?"
We have enough trouble creating a computer that can interpret information the way even a single human sense does, let alone one that can interpret the breadth and depth of information that leads to existential questions. It seems unlikely that anything short of that would cause a computer program to actively turn on its creators.
That, of course, or intentionally designing a system to do just that, and at that point, aren't we better off worrying about nuclear weapons or bio-terrorism, both of which are far easier to actually build and bring about?
As I say, when all is said and done, our vision of our intelligent creations as monsters seems, like Frankenstein, to be a projection of our own human flaws onto things that we have little reason to believe would possess them. If real human-level AI were to come about, there's no real reason it shouldn't be as benevolent as the Minds of Iain M. Banks' Culture series or the Three-Laws-abiding robots of Asimov, rather than displaying the malevolence of Ellison's "AM" or Clarke's HAL.