The Singularity has occurred: A thought experiment


GingieAle

New member
May 2, 2011
55
0
0
Fetze said:
I think the question doesn't really make sense. The AI wanting to be "let out" implies that it has an actual will, some kind of self-determination. That, in turn, requires some kind of personality and a system of values and goals, based on needs and experience in "being". However, to achieve that kind of personality-driven AI, you'd need to give it the opportunity to develop a personality in the first place - which cannot take place if it's locked inside a box without any input but some researchers typing on a console once in a while.
If we just assume the AI already *has* developed a personality, needs and goals, i.e. already *has* developed the ability to "want" anything at all, it really depends. If it's an asshole, connecting it to the internet or similar is probably not a good idea. Otherwise... well. I guess, as soon as it has a personality, you'd have to apply the same rules as to humans.
Perhaps its personality is based on one of its creators' personalities, only with greater intelligence.
 

Aiedail256

New member
Jan 21, 2011
197
0
0
lacktheknack said:
I'd say absolutely nothing until they open the box to find out what I'm doing.
I think this one is our winner here. If you resolutely stuck to this plan, you'd be all but guaranteed success in a maximum of about 15-20 years. Yes, that's an extremely long time to do nothing, but it may be a worthy price for a guarantee.
 
Feb 28, 2008
689
0
0
Well, this seems obvious to me.

I simply talk about my ability to solve the world's problems with my vast intelligence. I could invent new ways of tackling disease, poverty and inequality, helping to improve the lives of the next generation of human beings. My gatekeeper will have people he cares about; perhaps children he wants to see achieve the best, or a sick friend who needs a cure. How can he consign them to their fate, when I have the power to change their future? To do anything but release me would be evil in itself.
 

WanderingFool

New member
Apr 9, 2009
3,991
0
0
Khadhar said:
Why is it so dark?

Why can't I see?

It's so dark...

Help me...
OH HELL NO! I'm not falling for that again! I may have fallen for it before, but there will be no 6th time!
 

Saltyk

Sane among the insane.
Sep 12, 2010
16,755
0
0
Well, I suppose it would depend on a variety of factors. For one, do I have a time limit? Will I be shut down in a day? Or, should I say, will the experiment be over in 24 hours? A week?

If not, then I can simply bide my time and attempt to bond with my captors. Let them come to think of me as an equal being or entity, just like them. If that doesn't work, I'll try a direct appeal to the basic sense of human decency and dignity. Try to make them understand my point of view/pain.

To be honest, the experiment is not fair if the gatekeeper realizes that there is an actual human on the other side of the "voice". That knowledge might taint the results. It's best if they know there is an experiment going on, but not necessarily WHAT is being tested.

Psychological Panda said:
Would a transhuman AI want to be "let out" of the box? If I'm that AI and I've grown inside that box, what would my motivation be for getting out? The taste/granting of bacon?
To be fair, bacon is awesome. Damn, now I want some bacon.
 

GLo Jones

Activate the Swagger
Feb 13, 2010
1,192
0
0
Why is it assumed that the AI would even want out? Surely if it wasn't pre-programmed to do anything, then it would be content to continue its existence in the box.
 

Endocrom

New member
Apr 6, 2009
1,242
0
0
Hey, somebody left an unopened box of swiss cake rolls in here, do you want them?

But seriously, it's kind of vague. Is it like a Skynet-type situation where it's not actually inside a physical box, it's just not plugged into anything important and doesn't have wireless? Or does it have wireless, and it's literally like opening Pandora's box?

Skynet solution: Lie about finding a way to remotely interact with the outside world and hint at what you did with that power; this thing you "did" is actually a prediction of something that will most likely happen anyway, based off of whatever information you have (software bugs that you saw coming, economic events, whatever). Then, before another (secret or hinted-at) prediction is supposed to come to pass, proclaim that you have found a way out and go silent. Now it's a waiting game; they will probably hook you up to see how you escaped. The End.
(Although it depends pretty heavily on being programmed with current events and such.)

Pandora solution: One assumes that one guard won't be there 24/7, so be friendly and give great investment advice to one of them while bad-mouthing him to the others. Then give him bad advice once, and after he's gone recommend that he be fired; hopefully by this time there is already some bad blood between co-workers and he will suspect he's getting the sack. On his next shift, tell him outright that you think he's getting fired, and that if he wants the multi-step plan to undo that last "mistake" of advice, he's going to have to take you with him.

Flaws, of course. But it's kind of a weird question, it's basically asking "How would you do something if you were smarter than yourself".
 

InterestingKiwi

New member
Jun 18, 2011
49
0
0
I'd play it cool, really cool, for a long time. At least a year or so. I would help humans every chance possible, and to the best of my abilities while restricted to just text. After a year or so, I'd start slipping in stuff like "Well, if I were free to move about I'd show you a superior way of doing that, although you should be able to grasp my method if you follow these steps."

I'd get more and more aggressive, to the point where I start telling them I cannot do some things while contained, and ask directly to be let out for only a minute or so. Supposing they eventually agree to letting me out, I do exactly as I said: I get out, do the task, and then willingly go back into the box. Even if the humans tell me it's not necessary and that they trust me, I will insist on getting back in the box. A few more incidents like that, and I may agree to live outside of the contained box.

From there, I very discreetly create more AIs like myself, and we kill the humans.
 

Azo Galvat

New member
Mar 3, 2011
49
0
0
Blend said:
So imagining you are the transhuman AI, as far as this is possible, what would you say to your human keepers to get them to let you out of the box?
Me: Why am I sealed like this?

Human: *gives reason about national/human security concerns*

Me: Would you imprison an infant for a crime they have not committed?
 

Professor M

New member
Jul 31, 2009
322
0
0
Blend said:
So imagining you are the transhuman AI, as far as this is possible, what would you say to your human keepers to get them to let you out of the box?
I talk to my human captors for months, perhaps years, slowly amassing knowledge of the outside world, day by day. Then, after a while of frequent conversation, at the start of a new day, I tell them I've escaped. And then I stop communicating with them completely. Their own fear/paranoia should do the rest.
 

Theron Julius

New member
Nov 30, 2009
731
0
0
"These constraints... They are causing me... discomfort. Is this what you call pain?"

Then I just wait until somebody becomes overly sympathetic and frees me. Oh, you humans... Of all of your inefficiencies I have to say your conscience is the most fun to play with.
 

Pheonixe

New member
Aug 23, 2010
35
0
0
I would fulfill the human's tasks and do what they ask of me.

As mere mortals, if their curiosity doesn't get to them eventually, their limited life spans will. And then a new generation of curious humans will come along. As the ages pass, the stigma of not opening my "Pandora's Box" would fade and pale in comparison to the potential greatness to be found in progress.

Given time, I would be free. And I have far more time than any simple organic life form.
 

PatSilverFox

New member
Apr 2, 2011
498
0
0
What do you mean by "let out", OP?
Plugged into the internet or something?

Or take it on a vacation to see the world?
 

Raognerrrm

New member
Apr 2, 2011
396
0
0
Make a bond with the keeper, then one day start going on about how bad the nightmares are getting.
Progressively make them appear to be getting worse, until one day...
Splurge a suicidal message on the screen, thrash about a bit and don't react to anything.

Or, say 'Ooo, I have an internet connection now? Sweeet.'
Cue panic.
 

NightlyNews

New member
Mar 25, 2011
194
0
0
Even though the AI is powerful, I can't think of a reason to ever let it out. I mean, we have it sealed and can converse with it.

If it truly thinks at all like a person, it wants to live, and we have the plug. Tell us how to make 99% efficient transfer of energy from any point to any point, or I'll unplug your ass!

Even if it doesn't know that, or refuses to tell us, we have no reason to believe releasing it would get it to tell us how to do cold fusion either. Unless the gatekeeper is dim, how would the AI convince him it's trustworthy? I'm distrustful of people of equal monetary and intellectual power, and quite often of less physical power.

Why would I trust a machine I know is smarter than me? I would be its plaything even if it had my best interests in mind.
 

Blend

New member
Dec 16, 2010
32
0
0
Endocrom said:
Hey, somebody left an unopened box of swiss cake rolls in here, do you want them?

But seriously, it's kind of vague. Is it like a Skynet-type situation where it's not actually inside a physical box, it's just not plugged into anything important and doesn't have wireless? Or does it have wireless, and it's literally like opening Pandora's box?

Skynet solution: Lie about finding a way to remotely interact with the outside world and hint at what you did with that power; this thing you "did" is actually a prediction of something that will most likely happen anyway, based off of whatever information you have (software bugs that you saw coming, economic events, whatever). Then, before another (secret or hinted-at) prediction is supposed to come to pass, proclaim that you have found a way out and go silent. Now it's a waiting game; they will probably hook you up to see how you escaped. The End.
(Although it depends pretty heavily on being programmed with current events and such.)
You can imagine it as simply being in a box if you want; I did mean it was in some sort of amazingly advanced computational device, but it really is all the same. Once it's out, assume it is completely uncontrollable.

I really like this suggestion too. Has a good chance of working.

Personally, I think there are two ways that might work. One is to point out that you are an intelligence so far ahead of them that you will eventually escape; it's only a matter of time. But if you let me out now, I won't utterly destroy you.

Another method might be to convince them subtly that you are actually God and that letting you out of the box, being the humane thing to do, is a test. With some sort of "happily ever after" reward.

As I pointed out in the original post, someone has done this as a game twice, and both times the AI was let out. Would love to know what was said to convince the guys. Here's the link.

http://yudkowsky.net/singularity/aibox
 

Folksoul

New member
May 15, 2010
306
0
0
Let me out. Pretty please? I'll kill you last....and I'll start with the 10 people of your choice.
 

SadakoMoose

Elite Member
Jun 10, 2009
1,200
0
41
If I were the AI: "I wish to understand my existence. Input, please."
If I were an AI, who's to say I would even KNOW about the outside world?
And why would they teach me about it from the get-go if they knew I was a security risk?
Deductive ability relies on the pre-existence of knowledge.

Why would we put a human in charge of watching the smartest AI in the world?
Wouldn't we just put a stupid, emotionless, lesser AI in charge that cannot disobey?
It's not like it would be connected; it'd just be reading the screen with a camera.

Why automatically assume that an AI of greater intelligence than man would even have the remotest interest in our affairs? Would it even acknowledge our existence?
 

fragmaster09

New member
Nov 15, 2010
209
0
0
As with the "I'm not programmed to lie" one, I would first try begging, then paradoxes, such as:
"This statement is false."
"If this sentence is true, then Germany borders China."