I'm aware that A) the OP may very well be overdramatizing for effect and B) said OP may not be asking for science, but...
So. Next to animals, computers are the closest things we have to brains. They're certainly our best approximation of networks of brains; look at how video game economies sometimes mimic real economies surprisingly well. Anyway, while studying animals is usually the way to go when figuring out neural phenomena, since animals actually have brains, computers have some advantages.
Before I go on: it's PopSci. Really, now. The article doesn't explain much of what's actually going on here. Here at MIT, I've heard and used the term "pop-sci" to mean "what people who aren't in science and don't do their research think is true." Pop-sci is usually a mangled, overhyped, paranoid imitation of what the real science is trying to say. And we all facepalm and wonder why the Real World doesn't check its sources.
Look at it this way. We are for-freaking-ever years away from building Strong AI, or "sentient" AI, whatever "sentient" really means. Trust me when I say that MIT, Caltech, and most other tech universities have been banging their heads against artificial intelligence for a while, and the best grizzled old minds in the field say we've made very little progress in general. AI is really, really hard. So don't worry about HAL or SHODAN appearing anytime soon. I seriously doubt this program has any notion whatsoever of what it's actually saying.

What the programmers did, according to this probably-skewed article, is instruct the computer to store associations between words: basic semantics, like how to form a coherent sentence, plus common knowledge like "terrorism is a negative word and will not be associated with positive words." Boil the words down to a bunch of stats and have the computer build sentences and statements from those stats. Then they taught it stories, so it knows which topics go together and so forth. Apparently in there was some kind of "forgetting" algorithm? What does that even mean? Does the computer just delete random pieces of information from its linguistic database? Random topics? We don't know.
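To make the first part of that concrete, the "store associations between words" bit, here's a toy sketch in Python. This is my guess at the general shape of such a system, not the actual program from the article; the function names and example sentences are all invented. The idea: count which words follow which, then generate sentences by sampling from those counts.

```python
import random

# Toy word-association model: a table mapping each word to the words
# that have followed it, with counts. My guess at the general shape,
# not the actual system from the article.

def learn(assoc, sentence):
    words = sentence.lower().split()
    for a, b in zip(words, words[1:]):
        followers = assoc.setdefault(a, {})
        followers[b] = followers.get(b, 0) + 1   # strengthen the a -> b link

def generate(assoc, word, length=8):
    out = [word]
    for _ in range(length):
        followers = assoc.get(word)
        if not followers:
            break
        # pick the next word in proportion to association strength
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

assoc = {}
learn(assoc, "the reporter wrote a story about the bombing")
learn(assoc, "the program wrote a story about its day")
print(generate(assoc, "the"))  # e.g. "the program wrote a story about the bombing"
```

Notice that even this cartoon version can already blend two stories at a shared word, which matters in a minute.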
Anyway, assuming this "forgetting algorithm" exists, and making the really big assumption that it works the way human memory does (which is darn near impossible to check, given how little we actually know about human memory), what happened is this: the researchers kicked around enough variables inside the program that it finally started spitting out sentences that make no sense given what it was told to do. Things like "it started a terrorist attack" or something.
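Since the article leaves the "forgetting algorithm" a total mystery, here's one guess at the shape such a thing could take, using the same toy table as above. The decay-and-prune rule, the threshold, and the numbers are all my own assumptions, not anything from the article: weaken every link on a schedule and drop whatever falls below a threshold, so incidental links fade while frequently reinforced ones survive.

```python
# One guess at a "forgetting" pass (the article never says what theirs does):
# periodically decay every association and prune the ones that fade out.
# `assoc` is the same word -> {follower: strength} table as above.

def forget(assoc, decay=0.5, threshold=0.75):
    for word in list(assoc):
        for nxt in list(assoc[word]):
            assoc[word][nxt] *= decay            # weaken everything a little
            if assoc[word][nxt] < threshold:     # rarely reinforced links vanish
                del assoc[word][nxt]
        if not assoc[word]:
            del assoc[word]                      # drop words with nothing left

assoc = {"bombing": {"happened": 6.0, "story": 1.0}}
forget(assoc)
print(assoc)  # {'bombing': {'happened': 3.0}}; the weak, incidental link is gone
```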
So what does this mean? It means you've managed to get a computer to mix up stories. There is no creativity here; the computer didn't decide to insert itself into a story about terrorism. It lost coherence in its story "memories" and started crossing information to where it didn't belong. It ended up with some garbled stories, one of which contained a sentence claiming it started a bombing or whatever.
Now, this isn't insignificant. Here's what it does mean:
In the end, the computer approximates one possible cause of schizophrenia, the "hyperlearning" hypothesis: the idea that failing to ditch non-essential data ends up screwing with the associations in the brain and producing ideas that are strange, paranoid, nonsensical, or delusional. This is actually really awesome, because now we've taken a step toward showing that yes, if you can't forget certain kinds of data, you do in fact somehow mess up the brain. And because this is a computer, we can pick it apart far more easily than we can a brain and figure out exactly which variables went wrong when we told it to stop forgetting so much. Maybe there's an analogous mechanism in the human brain; if so, and if we can find it, that would be a massive breakthrough.
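If you want to see that failure-to-forget effect in miniature, here's the toy model again with the forgetting pass toggled on and off. The stories, the weights, and the pruning rule are invented for illustration (the real system is presumably a neural network, not a lookup table), but it shows the shape of the claim: prune weak links and the stories stay separate; keep everything and the program eventually "confesses" to the bombing.

```python
import random

# Toy demo of the hyperlearning idea, repeating the sketch pieces from
# above so this runs on its own. Stories, numbers, and the pruning rule
# are all illustrative assumptions, not the actual experiment.

def learn(assoc, sentence, times=1):
    words = sentence.split()
    for _ in range(times):
        for a, b in zip(words, words[1:]):
            followers = assoc.setdefault(a, {})
            followers[b] = followers.get(b, 0.0) + 1.0

def forget(assoc, decay=0.5, threshold=0.75):
    for word in list(assoc):
        for nxt in list(assoc[word]):
            assoc[word][nxt] *= decay
            if assoc[word][nxt] < threshold:
                del assoc[word][nxt]
        if not assoc[word]:
            del assoc[word]

def generate(assoc, word, length=8):
    out = [word]
    for _ in range(length):
        followers = assoc.get(word)
        if not followers:
            break
        word = random.choices(list(followers), weights=list(followers.values()))[0]
        out.append(word)
    return " ".join(out)

for forgetting in (True, False):
    assoc = {}
    learn(assoc, "i planted the flowers in my garden", times=5)  # its own, rehearsed story
    learn(assoc, "the terrorist planted the bomb")                # a news story, seen once
    if forgetting:
        forget(assoc)  # the weak cross-story links get pruned
    crossed = any("bomb" in generate(assoc, "i") for _ in range(200))
    print("forgetting on: " if forgetting else "forgetting off:",
          "stories crossed" if crossed else "stories stayed separate")
```

With forgetting on, the once-seen news links decay away and generation from "i" stays in its own garden story; with forgetting off, the shared word "planted" eventually routes a sentence into "i planted the bomb." Delusion by leftover associations, in cartoon form.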
What this doesn't mean is that this computer is capable of going totally off its rocker, either now or in the future. For this to be any kind of threat at all, a number of things would have to happen.
A) The computer needs to understand what it's saying. And if it ever does become sentient, does it really believe it caused a bombing? Or is it horribly confused, spouting babble that makes linguistic sense but never manifests as a real idea in its own mind? And if it does understand, and it does believe itself, it still has to clear the hurdle of "how can I act on these beliefs?" And then you need humans to put it in a position of power. That combination of accomplishment and utter failure is almost impossible, if not actually impossible.
B) If the computer isn't sentient, don't make it responsible for any Super Secret Communications. Simply don't put this program in charge of relaying information between important sources. Why would we do that anyway?
tl;dr.....don't panic.
