They're kidding. (At least, the people in this thread are.) Don't take it seriously.

Happyninja42 said:
God I get so tired of these Borg, Cyberdyne "The Robots are going to kill us!" articles every time there is a scientific advancement.
Yes, let's taint every possible scientific achievement and breakthrough with images of fear and destruction from pop culture. Please continue to do this, I'm sure it's totally helpful when it comes to public opinion about the validity of these projects. *dribbles sarcasm from every fucking pore*
I was wondering about that. I can't program, but I know random first-year computer science bachelor students who have told me they know how to use similar (though probably more basic) techniques. It's still interesting, but as per usual with scientific achievements, a lot of the legwork was already there. I do think it is still impressive that a robot can observe facts about the world and modify its behaviour accordingly. In any case, though it may not be new to you, I don't really expect popular media to keep all that up to date with the cutting edge of science. We have scientists for that. This is aimed at interested outsiders who like to know the broad direction of where research is going, and this is a good example.

Areloch said:
This really isn't new.
I mean, the APPLICATION of the existing methodology is new, but the root idea is not. We've been utilizing genetic learning methods for robots and AI for a long time now.
The only thing that makes this special is that it's applying it to something it's put together as opposed to itself.
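For anyone curious what a "genetic learning method" actually looks like in code, here's a minimal sketch of the evaluate-select-breed loop being described. This is purely illustrative and not the researchers' code; the genome encoding and the fitness function are made-up stand-ins, and in the real experiment the fitness test would be a physical one, such as how far an assembled child robot travels.

import random

# Minimal genetic-algorithm sketch: evolve candidate "child" designs,
# each encoded as a list of numbers, toward a higher fitness score.

GENOME_LENGTH = 5      # how many parameters describe one design
POPULATION_SIZE = 20
GENERATIONS = 30
MUTATION_RATE = 0.1

def random_genome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    # Placeholder objective: reward genomes whose values are close to 0.5.
    # A real robot experiment would measure physical performance instead.
    return -sum((g - 0.5) ** 2 for g in genome)

def mutate(genome):
    # Occasionally perturb a parameter with small Gaussian noise.
    return [g + random.gauss(0, 0.2) if random.random() < MUTATION_RATE else g
            for g in genome]

def crossover(a, b):
    # Splice two parent genomes at a random cut point.
    cut = random.randrange(1, GENOME_LENGTH)
    return a[:cut] + b[cut:]

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(GENERATIONS):
    # Evaluate every design and keep the better half as parents.
    population.sort(key=fitness, reverse=True)
    parents = population[:POPULATION_SIZE // 2]
    # Breed the next generation from the surviving parents.
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POPULATION_SIZE - len(parents))]
    population = parents + children

print("best fitness:", fitness(max(population, key=fitness)))

The point of the sketch is the structure, not the numbers: the novelty in the article is only that the evaluation step is applied to a thing the robot builds rather than to the robot's own behaviour.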
With sufficient power for machines there's no telling what would happen. Our destruction is one possibility - it could simply be negligence. Animal species go extinct every day due to human actions. Animals that get in our way become roadkill, unless they are fuzzy and cute enough, in which case sometimes they are spared, or become our "pets".

inu-kun said:
It's always fun to say it, but besides using the word "evolve" it's not much to go on to start the robot apocalypse. Why would machines destroy mankind anyways?
It wouldn't "want" to, but it very well could.

inu-kun said:
It's always fun to say it, but besides using the word "evolve" it's not much to go on to start the robot apocalypse. Why would machines destroy mankind anyways?
I agree... although I think there does need to be a non-alarmist, well-informed consideration of the potential risks, because even Stephen Hawking is kind of concerned.

Happyninja42 said:
God I get so tired of these Borg, Cyberdyne "The Robots are going to kill us!" articles every time there is a scientific advancement.
Yes, let's taint every possible scientific achievement and breakthrough with images of fear and destruction from pop culture. Please continue to do this, I'm sure it's totally helpful when it comes to public opinion about the validity of these projects. *dribbles sarcasm from every fucking pore*
Erm... I'm pretty sure that's not the intention. I mean, I'm not an expert on how scientific research works, but I would have thought it would be either profit or attempting to solve a problem, or... well, maybe scientists do just "SCIENCE" for the sake of science, but I don't know.

briankoontz said:
It's extremely dangerous. But we know our days are numbered, so we're establishing who will succeed humans in terms of an advanced civilization.
I'm well aware they are kidding, but it's still annoying. Public opinion on things is colored by the public discussion, and when the only headlines you see about these advances are "Scientists make Monkey Borg" or "Scientists create Cyberdyne, we're all doomed" over and over and over, it does color the public view of things. Humans behave very stupidly about a lot of things, and whenever some new advancement is presented to the public in language that only describes how it is going to be a danger and hazard to humanity, it paints a negative view of the research. I mean hell, there are people who genuinely think that people like Steve Jobs and Bill Gates are actually working for the forces of evil, and are heralding the end of the world because of their evil machines. Which is just stupid, but that's how humanity can behave.

FalloutJack said:
They're kidding. (At least, the people in this thread are.) Don't take it seriously.

Happyninja42 said:
God I get so tired of these Borg, Cyberdyne "The Robots are going to kill us!" articles every time there is a scientific advancement.
Yes, let's taint every possible scientific achievement and breakthrough with images of fear and destruction from pop culture. Please continue to do this, I'm sure it's totally helpful when it comes to public opinion about the validity of these projects. *dribbles sarcasm from every fucking pore*
Anyway... I'll be waiting for them to do something unexpected and weird before calling this Turing Tested And Approved.
Oh, it's definitely cool.

Pseudonym said:
I was wondering about that. I can't program, but I know random first-year computer science bachelor students who have told me they know how to use similar (though probably more basic) techniques. It's still interesting, but as per usual with scientific achievements, a lot of the legwork was already there. I do think it is still impressive that a robot can observe facts about the world and modify its behaviour accordingly. In any case, though it may not be new to you, I don't really expect popular media to keep all that up to date with the cutting edge of science. We have scientists for that. This is aimed at interested outsiders who like to know the broad direction of where research is going, and this is a good example.

Areloch said:
This really isn't new.
I mean, the APPLICATION of the existing methodology is new, but the root idea is not. We've been utilizing genetic learning methods for robots and AI for a long time now.
The only thing that makes this special is that it's applying it to something it's put together as opposed to itself.