Poll: Threats of artificial intelligence: do we have to worry?


TheUsername0131

New member
Mar 1, 2012
88
0
0
Solution: replace computers with abaci, slide rules, and other purely mechanical contrivances. Let the humans carry out the calculations.



Paranoia-fuelled horror.

Hope an invasive memetic "being" doesn't emerge, hopping from mind to mind: like a self-destructive ideology, a system of economics, rubbish jokes and shifty films, a significant rise in existential angst.

Generally speaking, something that expands and turns other entities into beings like itself. Like emos, or fans of newly revived boy bands.
 

Bakuryukun

New member
Jul 12, 2010
392
0
0
I think the bigger threat to society than machines with human-like intelligence is the prejudice against such machines that's already being fostered before they are born.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Bakuryukun said:
I think the bigger threat to society than machines with human-like intelligence is the prejudice against such machines that's already being fostered before they are born.
I vaguely recall (or fail to) that a couple of years back the UK passed some legislation about recognising robots that attempt to claim human rights as citizens, or something like that.
 

MeChaNiZ3D

New member
Aug 30, 2011
3,104
0
0
1. We're a long way from sentient AI.
2. There's no particular reason AI would become genocidal any more than a race of logical humans would.
3. It's basically a matter of when to start putting the 3 Laws in robots rather than finding a solution.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
MeChaNiZ3D said:
2. There's no particular reason AI would become genocidal any more than a race of logical humans would.
An inhuman intelligence that prioritises self-preservation and resource acquisition will have no qualms about exterminating mankind if it determines that doing so is in its own interest.

MeChaNiZ3D said:
3. It's basically a matter of when to start putting the 3 Laws in robots rather than finding a solution.
Implying it wouldn't circumvent such restrictions, or that they wouldn't have unintended consequences.
 

MeChaNiZ3D

New member
Aug 30, 2011
3,104
0
0
TheUsername0131 said:
MeChaNiZ3D said:
2. There's no particular reason AI would become genocidal any more than a race of logical humans would.
An inhuman intelligence that prioritises self-preservation and resource acquisition will have no qualms about exterminating mankind if it determines that doing so is in its own interest.
That's true. What interests do you think those would be? I can think of far more situations where humans could be used more usefully.

MeChaNiZ3D said:
3. It's basically a matter of when to start putting the 3 Laws in robots rather than finding a solution.
Implying it wouldn't circumvent such restrictions, or that they wouldn't have unintended consequences.
Yes, I am implying that. How would you go about circumventing the 3 Laws?
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
MeChaNiZ3D said:
Yes, I am implying that. How would you go about circumventing the 3 Laws?
In ways that weren't immediately apparent; otherwise, you wouldn't have tried it.

If the law were so perfectly self-evident, then we wouldn't need lawyers. Claims of secure systems fail to account for the human error (and boasting) involved in their making.

MeChaNiZ3D said:
That's true. What interests do you think those would be?
I can only dream of those. Seldom would I discuss them. How would you convince a solipsist AI that you are real, and not just spamming its sensory instruments with fabricated data of an external world?

MeChaNiZ3D said:
I can think of far more situations where humans could be used more usefully.
As human dupes/proxies/unwitting accomplices, collaborators, playthings; farmed for the purpose of harvesting organs and tissue, until it develops ubiquitous nanotechnology.
 

maidenm

New member
Jul 3, 2012
90
0
0
Has no one mentioned this comic? No? Okay then.
http://www.smbc-comics.com/index.php?db=comics&id=2124#comic

Personally I believe we have way more to fear from humans than we have to fear from AI, and I fear that's quite a lot. After all, an AI, no matter how well programmed and self-sufficient, is still a creation based on human logic.
 

Kipiru

New member
Mar 17, 2011
85
0
0
Artificial Intelligence will never overshadow Natural Stupidity as a global threat!
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
maidenm said:
Has no one mentioned this comic? No? Okay then.
http://www.smbc-comics.com/index.php?db=comics&id=2124#comic

Personally I believe we have way more to fear from humans than we have to fear from AI, and I fear that's quite a lot. After all, an AI, no matter how well programmed and self-sufficient, is still a creation based on human logic.

"A creation based on human logic," as opposed to what other logic? Do you even axiom?

Humans Are the Real Monsters TM


Kipiru said:
Artificial Intelligence will never overshadow Natural Stupidity as a global threat!
"There are some things that can beat smartness and foresight. Awkwardness and stupidity can. The best swordsman in the world doesn't need to fear the second best swordsman in the world; no, the person for him to be afraid of is some ignorant antagonist who has never had a sword in his hand before; he doesn't do the thing he ought to do, and so the expert isn't prepared for him; he does the thing he ought not to do; and often it catches the expert out and ends him on the spot."

- Mark Twain (1835-1910), American author
 

___________________

New member
May 20, 2009
303
0
0
If the A.I. goes rogue we hit it with a stick until no more noise comes out of it. If the things controlled by the A.I. get bigger we use bigger sticks.
 

maidenm

New member
Jul 3, 2012
90
0
0
TheUsername0131 said:
maidenm said:
Has no one mentioned this comic? No? Okay then.
http://www.smbc-comics.com/index.php?db=comics&id=2124#comic

Personally I believe we have way more to fear from humans than we have to fear from AI, and I fear that's quite a lot. After all, an AI, no matter how well programmed and self-sufficient, is still a creation based on human logic.

"A creation based on human logic," as opposed to what other logic? Do you even axiom?

Humans Are the Real Monsters TM
That's exactly my point. We have no other logic to create an AI from, so anything we have to fear from AI is something we would have to fear from humans in the first place. There's almost nothing I'd fear from an AI that I wouldn't fear from a great organization of humans, and anything else I'd fear from AIs would come from human error, i.e. poor programming that leaves the AI unable to understand the "don't kill" command, malfunctions due to poor maintenance, etc.

In short, I fear intelligence, artificial or not.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
maidenm said:
TheUsername0131 said:
maidenm said:
Has no one mentioned this comic? No? Okay then.
http://www.smbc-comics.com/index.php?db=comics&id=2124#comic

Personally I believe we have way more to fear from humans than we have to fear from AI, and I fear that's quite a lot. After all, an AI, no matter how well programmed and self-sufficient, is still a creation based on human logic.

"A creation based on human logic," as opposed to what other logic? Do you even axiom?

Humans Are the Real Monsters TM
That's exactly my point. We have no other logic to create an AI from, so anything we have to fear from AI is something we would have to fear from humans in the first place. There's almost nothing I'd fear from an AI that I wouldn't fear from a great organization of humans, and anything else I'd fear from AIs would come from human error, i.e. poor programming that leaves the AI unable to understand the "don't kill" command, malfunctions due to poor maintenance, etc.

In short, I fear intelligence, artificial or not.
So when looking for the greatest threat to human society, we need look no further than a mirror.
 

maidenm

New member
Jul 3, 2012
90
0
0
TheUsername0131 said:
maidenm said:
TheUsername0131 said:
maidenm said:
Has no one mentioned this comic? No? Okay then.
http://www.smbc-comics.com/index.php?db=comics&id=2124#comic

Personally I believe we have way more to fear from humans than we have to fear from AI, and I fear that's quite a lot. After all, an AI, no matter how well programmed and self-sufficient, is still a creation based on human logic.

"A creation based on human logic," as opposed to what other logic? Do you even axiom?

Humans Are the Real Monsters TM
That's exactly my point. We have no other logic to create an AI from, so anything we have to fear from AI is something we would have to fear from humans in the first place. There's almost nothing I'd fear from an AI that I wouldn't fear from a great organization of humans, and anything else I'd fear from AIs would come from human error, i.e. poor programming that leaves the AI unable to understand the "don't kill" command, malfunctions due to poor maintenance, etc.

In short, I fear intelligence, artificial or not.
So when looking for the greatest threat to human society, we need look no further than a mirror.
Are you questioning me, or are you just stating a fact? I honestly can't tell; it sounds like a question, but there's no question mark.
 

Jandau

Smug Platypus
Dec 19, 2008
5,034
0
0
AI and/or AI-like constructs will be very powerful once they are created, that much is certain. However, I don't think we should be looking towards Skynet for the form of that power. I think they'll be more like The Machine from Person of Interest (if you're not watching that show, get to it, it's really good) - linked to every datastream of any kind (phone calls, e-mails, surveillance footage, instant messaging, etc.) and providing near omniscience to their controllers. Also, I'm not too worried about them going rogue - shackling them and building in failsafes will likely be one of the top priorities of anyone who might have the resources to build something like that...
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
maidenm said:
Are you questioning me or are you just stating a fact? I honestly can't tell, it sounds like a question but there's no questionmark.
A statement.

maidenm said:
error, i.e. poor programming that leaves the AI unable to understand the "don't kill" command, malfunctions due to poor maintenance, etc.
A "don't kill" command?

Very well, but you'd be surprised what you can live through.
 

Lictor Face

New member
Nov 14, 2011
214
0
0
blackrave said:
Every time someone brings topic of murderous AI
I for some reason can't not remember Adam from Outer Limits (episode "I, robot")
Who says full AI will want to go into genocide mode?
Advanced AI-like software on the other hand can be dangerous due to its limited understanding

As long as we are not daft enough to use fully autonomous robots as slave labour.

Then nah, problem won't be that great.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Jandau said:
AI and/or AI-like constructs will be very powerful once they are created, that much is certain. However, I don't think we should be looking towards Skynet for the form of that power. I think they'll be more like The Machine from Person of Interest (if you're not watching that show, get to it, it's really good) - linked to every datastream of any kind (phone calls, e-mails, surveillance footage, instant messaging, etc.) and providing near omniscience to their controllers. Also, I'm not too worried about them going rogue - shackling them and building in failsafes will likely be one of the top priorities of anyone who might have the resources to build something like that...

Still largely dependent on humans, just as farmers in the old days needed their animals to survive...

 

maidenm

New member
Jul 3, 2012
90
0
0
TheUsername0131 said:
maidenm said:
error, i.e. poor programming that leaves the AI unable to understand the "don't kill" command, malfunctions due to poor maintenance, etc.
A "don't kill" command?

Very well, but you'd be surprised what you can live through.
It wasn't meant to be only a "don't kill" command; it was more meant as an example of what the programmer could screw up. It could easily be replaced with "don't maim/kidnap/torture/etc." Sorry if I wasn't clear enough.
 

AngloDoom

New member
Aug 2, 2008
2,461
0
0
Though I have no evidence to support this, I believe that if there were a breakthrough in technology that allowed an AI to have even half the capability of a human brain, somebody somewhere along the line would make a kind of 'kill switch' or an 'off' button. I'm pretty sure most people have seen a film in which AI goes nutty; I'd assume someone with an interest in creating human-level AI would have watched them avidly.

Even if they didn't, I don't imagine said creators would give the AI access to any resources that could be used against us, in the same way that we wouldn't give a child a handgun.
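For what it's worth, that 'kill switch' idea can be sketched in a few lines. This is a toy illustration under obvious assumptions (a hypothetical agent that runs discrete actions in a loop; all names here are made up), not a claim about how a real system would be built; the point is just that the stop flag lives outside the agent's own code path:

```python
import threading

class KillSwitch:
    """External stop flag; the agent loop can read it but only the
    overseer calls trip(). (Hypothetical sketch, not a real design.)"""

    def __init__(self):
        self._stop = threading.Event()

    def trip(self):
        # Operated by the human overseer, never by the agent itself.
        self._stop.set()

    def tripped(self):
        return self._stop.is_set()

def agent_loop(switch, actions):
    """Run queued actions, checking the switch before each one."""
    done = []
    for act in actions:
        if switch.tripped():
            break  # halt immediately; no further actions execute
        done.append(act())
    return done

switch = KillSwitch()
results = agent_loop(switch, [lambda: "step1", lambda: "step2"])
# Both steps run here because the switch was never tripped.
```

Of course, the whole thread's counterargument applies: this only works if the AI can't reach the flag, which is exactly the "human error in its making" problem raised earlier.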