Let us Welcome our New Skynet Overlords


0hnoes

The Magic Man!
May 18, 2015
18
0
0
This is how it starts!

http://www.bbc.com/news/science-environment-33867941
 

Dalek Caan

Pro-Dalek, Anti-You
Feb 12, 2011
2,871
0
0
Huh, I always thought Keen Software would be the ones to bring forth the Robo apocalypse.
 

happyninja42

Elite Member
Legacy
May 13, 2010
8,577
2,990
118
God I get so tired of these Borg, Cyberdyne "The Robots are going to kill us!" articles every time there is a scientific advancement.

Yes, let's taint every possible scientific achievement and breakthrough with images of fear and destruction from pop culture. Please continue to do this, I'm sure it's totally helpful when it comes to public opinion about the validity of these projects. *dribbles sarcasm from every fucking pore*
 

FalloutJack

Bah weep grah nah neep ninny bom
Nov 20, 2008
15,489
0
0
Happyninja42 said:
God I get so tired of these Borg, Cyberdyne "The Robots are going to kill us!" articles every time there is a scientific advancement.

Yes, let's taint every possible scientific achievement and breakthrough with images of fear and destruction from pop culture. Please continue to do this, I'm sure it's totally helpful when it comes to public opinion about the validity of these projects. *dribbles sarcasm from every fucking pore*
They're kidding. (At least, the people in this thread are.) Don't take it seriously.

Anyway... I'll be waiting for them to do something unexpected and weird before calling this Turing Tested And Approved.
 

KyuubiNoKitsune-Hime

Lolita Style, The Best Style!
Jan 12, 2010
2,151
0
0
If anything, this looks like the birth of technology that could lead to constantly improving androids, perhaps to something on the level of Reploids, a la Mega Man X. The question is whether they can improve themselves and each other... They might then start looking into ways of improving humans too. I kinda hope they develop a sense of morality by the time that starts happening.
 

Areloch

It's that one guy
Dec 10, 2012
623
0
0
This really isn't new.

I mean, the APPLICATION of the existing methodology is new, but the root idea is not. We've been utilizing genetic learning methods for robots and AI for a long time now.

The only thing that makes this special is that it's applying it to something it's put together as opposed to itself.
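For anyone unfamiliar with the genetic learning methods mentioned above, the core idea fits in a few lines. Here's a toy Python sketch (the fitness function and all names are made up purely for illustration; this is not the researchers' code):

```python
import random

random.seed(42)  # reproducible toy run

def fitness(genome):
    # Toy fitness: how close the genome is to an all-ones target.
    return sum(genome)

def mutate(genome, rate=0.05):
    # Flip each bit with a small probability.
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # Single-point crossover between two parent genomes.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=30, length=20, generations=50):
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]  # keep the fittest half
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
```

In real robotics work the toy fitness gets swapped for a physical measurement, which is exactly what the setup in the article does with its camera.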
 

Pseudonym

Regular Member
Legacy
Feb 26, 2014
802
8
13
Country
Netherlands
Areloch said:
This really isn't new.

I mean, the APPLICATION of the existing methodology is new, but the root idea is not. We've been utilizing genetic learning methods for robots and AI for a long time now.

The only thing that makes this special is that it's applying it to something it's put together as opposed to itself.
I was wondering about that. I can't programme, but I know random first-year computer science bachelor students who have told me they know how to use similar (though probably more basic) techniques. It's still interesting, but as per usual with scientific achievements, a lot of the legwork was already there. I do think it is still impressive that a robot can observe facts about the world and modify its behaviour accordingly. In any case, though it may not be new to you, I don't really expect popular media to keep all that up to date with the cutting edge of science. We have scientists for that. This is aimed at interested outsiders who like to know the broad direction of where research is going, and this is a good example.
 

briankoontz

New member
May 17, 2010
656
0
0
inu-kun said:
It's always fun to say it, but besides using the word "evolve" there's not much here to start the robot apocalypse. Why would machines destroy mankind anyway?
With sufficient power for machines there's no telling what would happen. Our destruction is one possibility - it could simply be negligence. Animal species go extinct every day due to human actions. Animals that get in our way become roadkill, unless they are fuzzy and cute enough in which case sometimes they are spared, or become our "pets".

We define intelligence in various ways, but one way is the degree to which free will is exerted. So the very process of creating "artificial intelligence" requires that the machine exert free will, which is a creative process with an unpredictable outcome, and requires that the machine have a certain amount of power comparable to a human being.

The only real predator of humans is other humans. We ensure this is the case by (mostly successfully) dominating and controlling all other species of animal. But the very attempt to create artificial intelligence requires that we subvert our own fear and desire to dominate in order for the machine to exert actual intelligence.

It's extremely dangerous. But we know our days are numbered, so we're establishing who will succeed humans in terms of an advanced civilization. As the saying goes, the man who has nothing has nothing left to lose. So a "worst-case scenario" of human annihilation at the hands of machines will merely speed up our own extinction, otherwise resulting from the dying earth.

The long-term purpose of humanity is no longer to sustain and build upon itself, but to build and enable its successor. The point at which this became true marked the transition from the human to the post-human age. Humanity does need to avoid being annihilated by machines until it believes that the machines will continue on past human existence.
 

Vault101

I'm in your mind fuzz
Sep 26, 2010
18,863
15
43
inu-kun said:
It's always fun to say it, but besides using the word "evolve" there's not much here to start the robot apocalypse. Why would machines destroy mankind anyway?
it wouldn't "want" to but it very well could

basically yeah it IS ridiculous to project human "desires" onto a robot. A robot might not feel the need to fight for its "rights" (what would it do with "rights"? what are rights?), and why would it want to enslave humans? A bunch of inefficient meatbags that need inefficient things like "down time" and "happiness"? *blegh*

but let's say you have an AI whose prime directive is to make people "happy"... so it takes the view of certain philosophies that non-existence is preferable to existence, and it devises a way to obliterate all life on the planet in a totally instantaneous (non-painful) way... yay?

all those potential unforeseen consequences are what make AI so potentially dangerous: you've got something that surpasses human intelligence and is hypothetically more alien than an actual alien (if you assume an alien is an organic life form and therefore operates under the survive->reproduce framework)

that's not to say that you COULDN'T make a benevolent AI, but the thing is you don't know WHAT it's gonna do... I mean, do humans take anthills into account when building a skyscraper? No

Happyninja42 said:
God I get so tired of these Borg, Cyberdyne "The Robots are going to kill us!" articles every time there is a scientific advancement.

Yes, let's taint every possible scientific achievement and breakthrough with images of fear and destruction from pop culture. Please continue to do this, I'm sure it's totally helpful when it comes to public opinion about the validity of these projects. *dribbles sarcasm from every fucking pore*
I agree... although I think there does need to be a non-alarmist, well-informed consideration of the potential risks, because, I mean, even Stephen Hawking is kind of concerned

briankoontz said:
It's extremely dangerous. But we know our days are numbered, so we're establishing who will succeed humans in terms of an advanced civilization.
erm... I'm pretty sure that's not the intention. I mean, I'm not an expert on how scientific research works, but I would have thought it would be either profit or attempting to solve a problem, or... well, maybe scientists do just "SCIENCE" for the sake of science, but I don't know
 

happyninja42

Elite Member
Legacy
May 13, 2010
8,577
2,990
118
FalloutJack said:
Happyninja42 said:
God I get so tired of these Borg, Cyberdyne "The Robots are going to kill us!" articles every time there is a scientific advancement.

Yes, let's taint every possible scientific achievement and breakthrough with images of fear and destruction from pop culture. Please continue to do this, I'm sure it's totally helpful when it comes to public opinion about the validity of these projects. *dribbles sarcasm from every fucking pore*
They're kidding. (At least, the people in this thread are.) Don't take it seriously.

Anyway... I'll be waiting for them to do something unexpected and weird before calling this Turing Tested And Approved.
I'm well aware they are kidding, but it's still annoying. Public opinion on things is colored by the public discussion, and when the only headlines you see about these advances are "Scientists make Monkey Borg" or "Scientists create Cyberdyne, we're all doomed" over and over and over, it does color the public view of things. Humans behave very stupidly about a lot of things, and whenever some new advancement is presented to the public in language that only describes how it is going to be a danger and hazard to humanity, it paints a negative view of the research. I mean hell, there are people who genuinely think that people like Steve Jobs and Bill Gates are actually working for the forces of evil, and are heralding the end of the world because of their evil machines. Which is just stupid, but that's how humanity can behave.

So yeah, I know it's a joke here, but it's symptomatic of a bigger issue when it comes to scientific advancement, and I'm frankly tired of it. Just look at movies: how often is the bad guy some scientist who "dared to tread in God's domain!" and makes some new thing that, of course, goes evil for no reason at all and threatens humanity? It's one of the most cliché movie tropes there is: the evil scientist, defying convention and making something amazing, but ultimately evil in nature. And I've heard many people describe various advancements using movie language, and they're not being optimistic about it. It's hurting advancement and I don't like it, whether they're joking or not. Because these are the same people who will then go to the polls and vote on important policies, based on information colored by pop culture that is frequently so inaccurate it's laughable.
 

Areloch

It's that one guy
Dec 10, 2012
623
0
0
Pseudonym said:
Areloch said:
This really isn't new.

I mean, the APPLICATION of the existing methodology is new, but the root idea is not. We've been utilizing genetic learning methods for robots and AI for a long time now.

The only thing that makes this special is that it's applying it to something it's put together as opposed to itself.
I was wondering about that. I can't programme, but I know random first-year computer science bachelor students who have told me they know how to use similar (though probably more basic) techniques. It's still interesting, but as per usual with scientific achievements, a lot of the legwork was already there. I do think it is still impressive that a robot can observe facts about the world and modify its behaviour accordingly. In any case, though it may not be new to you, I don't really expect popular media to keep all that up to date with the cutting edge of science. We have scientists for that. This is aimed at interested outsiders who like to know the broad direction of where research is going, and this is a good example.
Oh, it's definitely cool.

But the articles and commentary tend to treat stuff like this as absolutely new and cutting edge, when it's actually more of a novel application of an existing idea. Still cool, but not exactly Skynet ;)

For an example of using learning algorithms in simulation, here's a video highlighting an AI system learning how to walk efficiently with various physical structures.


But yeah, the hard part - and the interesting novelty of this particular implementation - is utilizing it on 'other things' than the robot/AI itself.

The approach is: the robot has the size of each block and their starting positions. It then moves to attach one to the other, then picks up the conjoined blocks and puts them into the 'test' space, where a camera can track how far the assembly moves from the starting point.

Then, similar to the video above, it repeats the steps, but with a minor permutation in the positioning of the blocks when it attaches them. If the blocks move farther/faster than in the last test, it considers that an improvement and iterates in that direction. If it does worse, it goes back to what it was trying before.

Eventually, it would glue the blocks together in an optimal arrangement to move as fast and far as possible (the article said it moved twice as far in the end).
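That perturb-and-keep-if-better loop is essentially a simple hill climber. A minimal Python sketch of the idea (the distance function here is a made-up stand-in for the camera measurement, and all names are hypothetical):

```python
import random

random.seed(0)  # reproducible toy run

def measure_distance(block_offsets):
    # Stand-in for the real test: the camera measuring how far the
    # assembled blocks travel. Here, a made-up function with one optimum.
    return -sum((x - 0.5) ** 2 for x in block_offsets)

def optimize(n_blocks=3, iterations=200, step=0.1):
    # Start from random attachment positions for each block.
    current = [random.random() for _ in range(n_blocks)]
    best_score = measure_distance(current)
    for _ in range(iterations):
        # Perturb the attachment positions slightly...
        candidate = [x + random.uniform(-step, step) for x in current]
        score = measure_distance(candidate)
        if score > best_score:
            # ...keep the change if the assembly moved farther,
            current, best_score = candidate, score
        # ...otherwise fall back to what worked before (keep `current`).
    return current, best_score

positions, score = optimize()
```

The robot's version just replaces `measure_distance` with a physical trial in the test space, which is why each iteration is so much slower than a pure simulation.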

The hard part is 'scaling it up'. They used an example of using this to have a welding robot arm detect defects and repair them, but that's kind of a different thing entirely.

That said, this could easily be utilized to optimize construction lines. Have the assembly line run a few iterations after it's set up and let the managing AIs optimize welding paths and the like.

So, it's not Skynet or anything, but could be incredibly useful for robots to find more and more efficient means of putting stuff together, which is good news for construction.