WHO THE F$%# THOUGHT THIS WAS A GOOD IDEA!?


Pinstar

New member
Jul 22, 2009
642
0
0
Aidinthel said:
Unless the computer is actually in charge of anything important, this isn't a threat. You really need to calm down.
/agreed

Besides, if it lets us understand more about the effects or causes of these types of conditions in the human mind and perhaps allows us to treat them better, I say go for it.
 

DarkRyter

New member
Dec 15, 2008
3,077
0
0
Hectix777 said:
Aidinthel said:
Unless the computer is actually in charge of anything important, this isn't a threat. You really need to calm down.
I'm sorry, but the thought of some small part of that thing slipping out of its module and getting onto the web is a risk I would not like to take. I don't want that thing turning into SkyNet; that PC needs to go down, hard-core!
Your understanding of computers leaves much to be desired.

Which is odd, because you have to be using one to type out that post.
 

BehattedWanderer

Fell off the Alligator.
Jun 24, 2009
5,237
0
0
AgentNein said:
BehattedWanderer said:
Fascinating. I applaud them for doing it. Using an artificial intelligence and giving it a disorder, for the purpose of trying to find a cure, is absolutely astounding. Well done them.
Still, you have to admit it was probably bad form on the scientists' part for attaching laser cannons to the computer.
Pssh. How can you call it science if there's not lazer cannons attached to things?
 

captainwalrus

New member
Jul 25, 2008
291
0
0
The most important lesson I learned from this thread iS tHaT WhEN ComPuTeRs tAkE ovEr tHe wOrLd tHey wIlL sPeAk iN aLteRNAtInG cApS!1!11!!1!
 

AgentNein

New member
Jun 14, 2008
1,476
0
0
BehattedWanderer said:
AgentNein said:
BehattedWanderer said:
Fascinating. I applaud them for doing it. Using an artificial intelligence and giving it a disorder, for the purpose of trying to find a cure, is absolutely astounding. Well done them.
Still, you have to admit it was probably bad form on the scientists' part for attaching laser cannons to the computer.
Pssh. How can you call it science if there's not lazer cannons attached to things?
Hrmm, touche.
 

New Frontiersman

New member
Feb 2, 2010
785
0
0
I think you're overreacting. Computers don't work like they do in the movies; I highly doubt this will spark the rise of the robot overlords.
 

Aurora Firestorm

New member
May 1, 2008
692
0
0
I'm aware that A) the OP may very well be overdramatizing for effect and B) said OP may not be asking for science, but...

So. Next to animals, computers are the closest things we have to brains. They're certainly the best approximation we have of networks of brains; look at how video game economies mimic real economies pretty well at times. Anyway, while studying animals is usually the way to go when it comes to figuring out neural phenomena, because they actually have brains, computers have some advantages.

First off, before I go on, it's PopSci. Really, now. The article doesn't really explain much of what's going on here. Here at MIT, I've heard and used the term "pop-sci" as "what people who aren't in science and don't do their research think is true." Pop-sci is usually a mangled, overhyped, paranoid imitation of what the real science is trying to say. And we all facepalm and wonder why the Real World doesn't check its sources.

Look at it this way. First off, we are for-freaking-ever years away from building Strong AI, or "sentient" AI, for whatever "sentient" really means. Trust me when I say that MIT, Caltech, and most other tech universities have been pounding away at artificial intelligence for a while, and the best grizzled old minds in the field say we've made very little progress in general. AI is really, really hard. So don't worry about HAL or SHODAN appearing anytime soon. I have my most extreme doubts that this computer program has any notion whatsoever of what it's actually saying.

What the programmers in question did, according to this probably-skewed article, is instruct the computer to store associations between words. You know, basic semantics, like "how to form a coherent sentence," and common knowledge like "terrorism is a negative word and will not be associated with positive words," or whatever. Boil the words down to a bunch of stats and have the computer make sentences and statements based on those. Then they taught it stories, so it knows what topics go together and so forth. Apparently in there was some kind of "forgetting" algorithm? What does that even mean? Does the computer just delete random pieces of information from its linguistic database? Random topics? Etc.? We don't know.
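To make that concrete, here's a toy Python sketch of what "store associations between words" plus some "forgetting algorithm" *could* look like. Everything here is my own guesswork — the article gives no implementation details, and the real system is almost certainly a trained neural network, not a lookup table like this:

```python
from collections import defaultdict

class ToyStoryMemory:
    """A caricature of 'store associations between words': just counts
    how often word B follows word A across the stories it is taught."""

    def __init__(self):
        self.assoc = defaultdict(float)  # (word_a, word_b) -> strength

    def learn(self, story):
        # Reinforce the link between each adjacent pair of words.
        words = story.lower().split()
        for a, b in zip(words, words[1:]):
            self.assoc[(a, b)] += 1.0

    def forget(self, threshold=1.5):
        # One guess at a "forgetting algorithm": discard associations
        # that were never reinforced enough to cross the threshold.
        self.assoc = defaultdict(float, {
            pair: s for pair, s in self.assoc.items() if s >= threshold
        })

    def continuation(self, word):
        # The strongest learned successor of `word`, or None.
        options = {b: s for (a, b), s in self.assoc.items() if a == word}
        return max(options, key=options.get) if options else None

memory = ToyStoryMemory()
memory.learn("the doctor treated the patient")   # rehearsed twice
memory.learn("the doctor treated the patient")
memory.learn("the agent planned the attack")     # heard only once

print(memory.continuation("agent"))   # "planned"
memory.forget()
print(memory.continuation("agent"))   # None -- the one-off story is gone
print(memory.continuation("doctor"))  # "treated" -- rehearsed memory survives
```

The point of the sketch is only that "forgetting" can mean something as mundane as pruning weak, unrehearsed associations, which is exactly the kind of knob you can turn on a program and can't turn on a brain.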

Anyway, assuming this "forgetting algorithm" exists, and making the really big assumption that it works like human memories do (which is darn near impossible given that we have very little clue about how human memory works), we have managed to kick around enough variables inside the computer program that it finally starts spitting out sentences that don't make sense based on what the programmers told it to do. Things like "it started a terrorist attack" or something.

So what does this mean? It means that you've managed to get a computer to mix up stories. There is no creativity here -- the computer didn't just insert itself into a story about terrorism. It lost coherency in its story "memories" and started crossing information to where it didn't belong. It ended up with some garbled stories, one of which contained the sentence that it started a bombing or whatever.

Now, this isn't insignificant. Here's all that this means:

In the end, the computer approximates a possible cause of schizophrenia, and that's the "hyperlearning theory" -- the idea that not being able to ditch non-essential data ends up screwing around with the associations in the brain and producing strange and off-the-wall ideas that are paranoid, or make no sense, or are delusional, etc. This is actually really awesome, because now we've taken a step towards showing that yes, if you can't forget certain kinds of data, you do in fact somehow screw up the brain. The fact that this is a computer means that we can pick it apart way better than we can a brain, and figure out what exact variables screwed up when we told the computer to stop forgetting so much. Maybe there is an analogous thing in the human brain -- if so, and if we can find it, this would be a massive breakthrough.
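Here's a hedged, toy illustration of that hyperlearning idea (entirely my own bigram sketch, not the researchers' code): when weak, under-reinforced associations are never pruned, a stray first-person link can chain straight into a heavily reinforced news story, and the model ends up "confessing" to it.

```python
from collections import defaultdict

def learn(stories, forget_threshold=0.0):
    """Accumulate bigram association strengths from a list of stories,
    then 'forget' (drop) any association at or below the threshold."""
    assoc = defaultdict(float)
    for story in stories:
        words = story.lower().split()
        for a, b in zip(words, words[1:]):
            assoc[(a, b)] += 1.0
    return {pair: s for pair, s in assoc.items() if s > forget_threshold}

def retell(assoc, word, length=5):
    """Greedily chain the strongest learned association from `word`."""
    out = [word]
    for _ in range(length):
        successors = {b: s for (a, b), s in assoc.items() if a == out[-1]}
        if not successors:
            break
        out.append(max(successors, key=successors.get))
    return " ".join(out)

stories = (
    ["i planned the party"]                    # heard once: a weak memory
    + ["the terrorist planned a bombing"] * 2  # heard twice: strongly reinforced
)

# Hyperlearning: nothing is ever forgotten, so the weak first-person
# link survives and chains into the reinforced bombing story.
print(retell(learn(stories), "i"))   # "i planned a bombing"

# With forgetting, the under-reinforced link is pruned and the news
# story is retold intact, without "i" ever crossing into it.
print(retell(learn(stories, forget_threshold=1.0), "the"))
# "the terrorist planned a bombing"
```

Obviously the real model is vastly more complicated, but this is the shape of the claim: the delusion isn't creativity, it's two stories bleeding into each other once weak links stop getting discarded.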

What this doesn't mean is that this computer is capable of going totally off its rocker, either now or in the future. In order to have this be any kind of threat at all, we would have to have a number of things occur.

A) The computer needs to be able to understand what it's saying. And if it ever becomes sentient, does it really think that it caused a bombing? Or is it horribly confused and just spouting babble that makes linguistic sense but doesn't actually manifest as a real idea in its own mind? And if it does understand, and it does believe itself, it still has to make it over the hurdle of "how can I act on these beliefs?" And then you need humans to put it in a position of power. This combo of accomplishment and utter failure is almost impossible, if not actually impossible.

B) If the computer isn't sentient, it can't be responsible for any Super Secret Communications. Simply don't put this program in charge of any relaying of information between important sources. Why would we do this anyway?


tl;dr.....don't panic. :)
 

Biodeamon

New member
Apr 11, 2011
1,652
0
0
I can see how you might think this would be a bad idea, what with all the crazy robots in culture today, but I highly doubt it; even if the computer were crazy, it would probably be harmless and at most give somebody a bad shock.
They probably did it to study schizophrenia, nothing else.
 

JJDWilson

New member
Feb 25, 2008
100
0
0
Honestly, the OP kinda offends me. I have schizophrenia, and I have never, ever harmed another human being, and the majority of schizophrenia sufferers are the same. Why the scientists wanted to do this is beyond me, and honestly I don't care, as long as it eventually benefits humanity in some way.

However, I take exception to the fact that some guy who hears the term "schizophrenic computer" thinks it will lead to global destruction.

You, sir, I deplore.
 

ryai458

New member
Oct 20, 2008
1,494
0
0
Irridium said:
Cortana never went crazy during her time with humanity. Sure she had a rough patch with Gravemind, but she was still stable.
Ha ha ha ha, you beat me to it. Also, OT: robots can't think, they're still too simple; we won't be conquered by AI in our generation.
 

Caligulove

New member
Sep 25, 2008
3,029
0
0
Because they can? Simulating, in artificial life, things that are perceived as uniquely human, to see how the machine would react or behave. It's not like the computer was connected to, or could interact with, some kind of defense system. It's a controlled experiment that you could argue might even come in handy, were the machines ever actually to revolt against us.

Or at least delay them. One of these days it'll happen, and we'll just have to go all Migrant Fleet on the Milky Way.