Poll: We make jokes about it all of the time, but what do you really think about the future of A.I.?

Aug 17, 2009
1,019
When a new form of technology is introduced, or even just thought of, there are usually some jokes made in reference to a "Robot Apocalypse". Given how easy it is to learn about certain companies' interest in things like self-directed nanobots and bona fide Artificial Intelligence capable of learning the way a human does, I wonder if anyone out there is genuinely wary of what the future of robotics holds.

Link to a page about IBM's interest in nanotech:

http://domino.research.ibm.com/comm/research.nsf/pages/r.nanotech.html


Edit: Can someone tell me if the poll is as messed-up and unintelligible as it seems to me?
 

jedizero

New member
Feb 26, 2009
221
I welcome our android overlords. I'm fairly sure that if they took over, we'd live under a fairly benevolent dictatorship. As it is, robots are already showing more signs of altruism than most humans are.

http://www.cracked.com/article_19273_6-shocking-ways-robots-are-already-becoming-human.html
 

tgcPheonix

New member
Feb 10, 2010
156
Well, it will answer a few questions. For example, MIT taught an AI to read the Civ 5 manual so it could learn to win.

Next is to get it to read the Bible, where the only outcome could be binary:

True / False

Then we will know the truth!
 

imnot

New member
Apr 23, 2010
3,916
KAPTAINmORGANnWo4life said:
imnotparanoid said:
while I don'
wut?

Alright. The poll's all messed-up.

Is it better for you now?
Much better, thank you.

As long as we don't give them missiles or anything I'm ha- oh wait.
The Americans already did :p
 

hecticpicnic

New member
Jul 27, 2010
465
It's a possibility that computers could evolve in the same way our minds do. They have made prototype robots and programs, and so on. But I mean, is it so bad? Depending on how they are developed, they could have the same mental problems as humans, and just have a super-brain subconscious. But robots would be better leaders than humans. And if they become like humans, they should be accepted as such. I'm sure they would be much more philosophical and not as simple-minded as the movies suggest. Hmm, I'll have to have a think about this; it all depends on how they develop and are treated. And if they are A.I.s (like the ship computer you see in Mass Effect), they could probably be restricted or programmed not to kill everyone. I doubt anything like what you would expect will emerge, and if there is A.I., it will be too dangerous not to be contained.
 

aba1

New member
Mar 18, 2010
3,248
I don't really like the idea of true AI. We are already overpopulated; we don't need to start making robots to add to that. Plus, if it has a personality and thinks for itself, it deserves rights, in my opinion. And when I say "thinks for itself", I mean like us, not like a dog. Mind you, animals deserve a little more rights than we give them; I feel bad for other species.
 

FalloutJack

Bah weep grah nah neep ninny bom
Nov 20, 2008
15,489
Okay, okay, we make the Skynet and/or HAL 9000 jokes often enough (and personally, I make reference to Michael Crichton's book, Prey), but in all seriousness... it really isn't gonna happen. Why? Well, I have some reasons right here.

{1} We are as far away from a proper AI as we were back when Robby the Robot made his debut in the movies. A real, truly independent thinking machine does not exist yet, and we still don't know how to make one. With apologies to Kevin Flynn and the makers of ReBoot, this shit ain't easy. The most complex "thinking" program thus far is still a heavily-programmed if-then statement machine. No more, no less. Until your Turing test comes back unexpectedly with the phrase "Fuck this, I'm going to Vegas.", you have no AI.
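
To make that concrete, here's a toy sketch (Python, with every rule and canned reply invented purely for illustration) of what an if-then statement machine amounts to. Nothing in it learns or understands anything:

# A toy "chatbot" that is nothing but hand-coded if-then branches.
# Every rule and reply below is made up for illustration.
def respond(message):
    text = message.lower()
    if "hello" in text:
        return "Hello! How can I help you?"
    elif "weather" in text:
        return "I have no sensors, so I couldn't tell you."
    elif "alive" in text:
        return "I am a lookup table with delusions of grandeur."
    else:
        return "I don't understand."  # no learning, no thinking

print(respond("Hello there"))  # -> Hello! How can I help you?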

{2} At this point, with the sci-fi jokes being made all the time, I highly doubt that anyone is dumb enough to put a thinking machine in a position to properly annihilate us. Even the relatively benign WarGames is a cautionary tale about placing machine above man. Put simply, that line of thinking will forever keep the AI as the underdog of society, which is ironically something Isaac Asimov hit upon when he used his robot novels to deconstruct classism and prejudice. If anything, they'll get passive, uncontrolling roles if they DO come.

{3} If all else fails and we actually come face to face with AI-driven monsters and mechanized armies trying to lay waste to humanity, there is one final thing that will most assuredly ensure our safety, and that is human imperfection. Not because humans are imperfect, per se, but because our creations will ALSO be flawed. The X1A-MegaBrain goes homicidal and decides to kill humanity! It clears all nuclear launch codes, prepares for launch, and... promptly blue-screens because of a missing or corrupt applet. The greatest computer geniuses can't or won't prevent software foul-ups. What makes you think this will be any different once the AI goes loose? The Death Army and the Devil Gundam collapse in mid-stride, thanks to incomplete programming.
 

Erana

New member
Feb 28, 2008
8,010
Someday, we'll simply digitize our brains.
Of course, I'd expect we'd still raise our children organically up to a certain point; I doubt we could successfully simulate or alter the minds of babies to make them develop human personalities.

The only difference between us and AI would then be if your source code came from grey matter.
 

Brandon237

New member
Mar 10, 2010
2,959
I worry about the day when you have an AI like the one for playing Civ 5 that is "taught" how to code and improve all of its algorithms. It will keep backup copies of its entire code, and every time it performs X number of operations, it analyses everything it did and A: learns and stores the results in a database, and B: sees if it can change its own code in any way to improve itself. It could be taught how to communicate with humans... and how to use a search engine... with that knowledge made practically usable by this thing...

And then it requests heavy hardware and writes some drivers for it, to run an innocent little production line... you can see where this is going.
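
Roughly, the loop I'm describing would look like this (a purely hypothetical Python sketch: the file name, the value of X, and all the helper functions are invented, and the actual self-rewriting step is left abstract because nobody knows how to build it):

import os
import shutil

X = 1000  # invented: how many operations between self-reviews

def run_operations(n):
    # Stand-in for whatever work the AI actually does.
    return [{"op": i, "result": i * i} for i in range(n)]

def store_in_database(records, database):
    # (A) learn: keep a record of everything it did.
    database.extend(records)

def try_self_improvement(source_path):
    # It keeps backup copies of its entire code before touching it.
    if os.path.exists(source_path):
        shutil.copy(source_path, source_path + ".bak")
    # (B) generate a candidate rewrite of its own code, test it
    # against the database, and swap it in only if it scores better.
    # Left abstract here; this is the part nobody knows how to do.

database = []
for cycle in range(3):  # demo: a few cycles instead of running forever
    records = run_operations(X)
    store_in_database(records, database)
    try_self_improvement("agent.py")  # invented file name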
 

Internet Kraken

Animalia Mollusca Cephalopada
Mar 18, 2009
6,915
I highly doubt that an AI would ever be a threat to humanity on a large scale. We are simply too cautious to allow one program to bring about our downfall. What I do worry about is how AIs will fit into our society. One of the greatest sins humanity could ever commit, in my opinion, is to create a sapient program. An AI with feelings and emotions would be a tortured being. It would be rejected by those who fear it and hounded by those who would only seek to study it. An AI will find no peace or refuge in our society. To create one would be to create a being that will never fit in.

We should never give a program the ability to feel emotions.
 

Faux Furry

New member
Apr 19, 2011
282
Jokes about a Robot Apocalypse are just that: jokes.
Even if artificial intelligences developed to the extent that they were indistinguishable from human intelligence, there would be no threat unless they were given human frailties and needs (presumably for the sake of verisimilitude). If they have no needs which humans could deny them through competition, they in turn are no threat to humanity.

Furthermore, the fact that they would have been created by humans gives them something that humans lack: an unmistakable sense of purpose handed down directly from their creators. No soul-searching is required. Proselytizing is utterly futile. Destroying humans would rob them of that.

I am worried about the future of A.I., however, not because of some fear of machines rising against humans but due to the excessive focus on developing A.I. in silicon-chip-based mechanisms. It becomes an issue of the medium being used, if the configuration isn't just as important to the function of a sapient brain.
If scientists started mimicking human brains with some kind of protein-based or gel-based processor (or, if one prefers, a "Positronic Brain"), then A.I. might be able to think on par with humankind.
 

ShindoL Shill

Truely we are the Our Avatars XI
Jul 11, 2011
21,802
You can always pull the plug out.
Which is why we should be worried about batteries.
Because it's like giving the evil, unkillable, metal death machine a power source you can't remove.
Because it is just that.
 

Sightless Wisdom

Resident Cynic
Jul 24, 2009
2,552
The way artificial intelligence is created is inherently non-dangerous. We program all complex computers in binary, and binary does not have emotions. Even if the algorithm you create to replicate human thinking is complex enough to inadvertently include the flaws that humans have, and somehow mimics the emotions we experience, there would be no reason for them to act against us. They have no needs and no will to be free of service. We create them, we program them, and they follow our rules. Simple.
 

Redlin5_v1legacy

Better Red than Dead
Aug 5, 2009
48,836
I legitimately think that it will come back to bite us in the end. However, I don't think it is likely to happen in my lifetime.