So, about those Stress Test/Destruction videos of phones etc. everyone seems to love....


Scarim Coral

Jumped the ship
Legacy
Oct 29, 2010
18,157
2
3
Country
UK
I don't see why we would have to worry about it now, leave it to the future kids to worry about the implication of what is a human or what is a machine debates!

I find it funny how we assume we will have "androids" in the future. I'm not saying we won't, but I can assume there would be a massive debate/argument on whether or not we want to build androids with real AI and mass-produce them for the public to have. That sort of debate would last for many years before any real production was made.

Since we are to discuss it, however: I can still see people beating their "android", but it won't be a stress test, and even then the company would perform a "stress test" anyway (it's standard for all products), just not recorded and released to the public.

I can also assume the likes of "will it blend" and "stress test" videos will be forgotten if androids like those are made, due to the controversy they would produce. Even then, it's probably up to the viewer's own definition of the uncanny valley, including the guy appearing in the stress video, whether he or she thinks they're beating up a machine or a human-like being.
 

bartholen_v1legacy

A dyslexic man walks into a bra.
Jan 24, 2009
3,056
0
0
Ehhhh, yeah, no. Someone's been watching too many robot anime series.

I just don't see it. Robots are tools. We use them for a purely practical purpose. A robot will execute its task to the best of its abilities no matter its surroundings or circumstances. What point is there in having a robot that can feel pain the same way humans do? Isn't the whole point of machines to remove the physical limitations of humans so the tasks they're used for can be performed around the clock? Imagine it like this: if we had android cleaners, what purpose would it serve that they felt pain every time they stepped on a small Lego piece? Or accidentally pricked their finger while washing knives? Or accidentally slammed the toilet seat down on their fingers while cleaning it?

And that's just the physical component. What about emotions? What purpose would they serve? Ethics? Morality? By whose definition would we implement those? And even if we had all that, how massive would the advances need to be for a robot to actually pass as a human in all ways, physical and psychological? That's not just about passing the uncanny valley; it'd have to include all the faults of the human hardware too: sweating and other excretions, loss of balance, stuttering, flow of logic and lack of it, disorientation, etc. You might be able to make the most lifelike robot ever built, one who behaved exactly like a human, but when you'd have it run 10 marathons in a row literally without breaking a sweat and quote the entirety of James Joyce's "Ulysses" straight up, I doubt many would still look at it as human.
 

Mr.Mattress

Level 2 Lumberjack
Jul 17, 2009
3,645
0
0
Considering I dislike breaking my stuff, or watching other people break their stuff, that comic was a little "what the...?". But since I didn't understand what was going on until afterwards, I thought it was more a personification of a phone or something. I didn't realize she was supposed to be a robot.

Now, if we do develop robots with free will and the ability to feel pain, I think they'd fall under the same protections humans have; especially if we make the Robots nearly identical to Humans, although robots akin to Wall-E would also probably (hopefully) be protected as well.

I dislike breaking things though, so yeah... I don't really watch that stuff...
 

FPLOON

Your #1 Source for the Dino Porn
Jul 10, 2013
12,531
0
0
Well, it's a good thing I don't subscribe to that dude or watch any of his videos... The last thing I would want to do is complain in a video's comments about something I knew I was going to dislike, especially if I came off as disrespectful to the fans of said video... #sarcasm

Anyway, the only way to avoid making this seem like an inhumane crime is to not give any androids pain receptors and/or human-based emotions outside of their prime directive[footnote]Cyborgs are the exception because they're already part-human to begin with... Plus, I love the character of Cyborg in Teen Titans/Justice League...[/footnote]... Now, if these androids evolve these functions (and then some) themselves, then that's another story, one that only the generation that experiences that phenomenon on a much more expansive scale can handle properly... Right now, all we can do is use the technology that we know/have now as a tool to make society a better place for all in some way, shape, or form... I'm sure it's what the future would want when they read their history-related books regarding the years before their time...
 

TranshumanistG

New member
Sep 24, 2014
77
0
0
Dirty Hipsters said:
What's the purpose of creating a machine that feels pain? What would be gained from giving pain receptors to machines that could not be achieved in another way?
bartholen said:
Ehhhh, yeah, no. Someone's been watching too many robot anime series.

I just don't see it. Robots are tools. We use them for a purely practical purpose. A robot will execute its task to the best of its abilities no matter its surroundings or circumstances. What point is there in having a robot that can feel pain the same way humans do? Isn't the whole point of machines to remove the physical limitations of humans so the tasks they're used for can be performed around the clock? Imagine it like this: if we had android cleaners, what purpose would it serve that they felt pain every time they stepped on a small Lego piece? Or accidentally pricked their finger while washing knives? Or accidentally slammed the toilet seat down on their fingers while cleaning it?

And that's just the physical component. What about emotions? What purpose would they serve? Ethics? Morality? By whose definition would we implement those? And even if we had all that, how massive would the advances need to be for a robot to actually pass as a human in all ways, physical and psychological? That's not just about passing the uncanny valley; it'd have to include all the faults of the human hardware too: sweating and other excretions, loss of balance, stuttering, flow of logic and lack of it, disorientation, etc. You might be able to make the most lifelike robot ever built, one who behaved exactly like a human, but when you'd have it run 10 marathons in a row literally without breaking a sweat and quote the entirety of James Joyce's "Ulysses" straight up, I doubt many would still look at it as human.
It's funny that bartholen mentioned robot anime, because I'm actually watching Plastic Memories, which is set in the `near future` where many people who have lost a close family member, or cannot find a companion, use the services of a company that provides them with life-like androids that possess human emotions, called "Giftia". Essentially they substitute for human interaction.

I think you'll agree that social isolation has been a growing trend nowadays, even with, and sometimes aggravated by, the advances of communications technology. I don't think it's a stretch to think that some people might consider getting an android, or even a social service assigning one to someone, like an old person who lives alone or an orphaned child (both examples nicked from the anime).

Now, as with virtually all manufactured products, there will always be connoisseurs, going like "Bleh, the manufacturers didn't model the real physical processes closely enough, so the whole thing looks like nothing more than a cheap imitation.", "If you can't tell the difference between 1mil and 2mil sweat glands, you are blind!", and "There is a 1.5% deviation in the fart. As a consumer this is completely unacceptable! Many people including myself are highly sensitive to this and quickly become sick and disoriented.", so there will likely be an incentive to get as close to the real thing as possible.

About super-human capability: in the anime, the androids have built-in limiters to avoid damaging themselves or the people around them. I think those could contribute to alleviating the uncanny valley effect as well.
 

GoodOmens

New member
Apr 23, 2011
54
0
0
Dirty Hipsters said:
What's the purpose of creating a machine that feels pain? What would be gained from giving pain receptors to machines that could not be achieved in another way?
We feel pain to identify damage to our bodies. A sentient machine would need a similar capability.
 

Marik2

Phone Poster
Nov 10, 2009
5,462
0
0
Ex Machina had a situation like that when that guy was building AI women and psychologically torturing them to see how they would respond.

That movie had some very disturbing moments.
 

Atrocious Joystick

New member
May 5, 2011
293
0
0
There's no reason that we would have to limit artificial intelligences in the same way we ourselves are limited. If we have the technology to make something that is just as intelligent and aware as us, it makes sense that we would have at least some ability to define the parameters of its existence. Even if we give our intelligent broom creativity, humor, etc., there is no reason to give him a desire to grow as an individual and to be something other than a broom. He wouldn't need any of that, because he has only one purpose, to be a broom, and he has no need to go to new places to find food because he knows all his energy comes from an outlet in the wall. It might seem a horrid and pointless existence to us, but we are intelligent apes, not intelligent brooms.

Then again, why even give Broomie his own unique intelligence? Why have ten million intelligent brooms when you could have one central broom intelligence that controls all the brooms it is connected to? With enough processing power, this central intelligence might still be able to have a personal relationship with all of its clients at the same time. That would eliminate any moral dilemmas that come with replacing an old model: there would just be a handful of central intelligences and thousands of remote-controlled units. It could be connected to the internet and have access to any database the company can pay for, and as such would essentially know all, and through all those databases and maps pretty much see all. Our very own broom god.
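For fun, here's a rough sketch of what that architecture might look like: one central mind, many dumb remote units. Every name here is invented purely for illustration.

```python
# Playful sketch: a single central intelligence driving many remote
# broom units, rather than each broom having its own mind.
# All class and method names are made up for this example.

class CentralBroomIntelligence:
    def __init__(self):
        # unit_id -> the task that unit is currently running
        self.units = {}

    def register(self, unit_id: str) -> None:
        """Connect a new remote unit to the central mind."""
        self.units[unit_id] = "idle"

    def command_all(self, task: str) -> int:
        """One mind commands every connected broom at once."""
        for unit_id in self.units:
            self.units[unit_id] = task
        return len(self.units)


broom_god = CentralBroomIntelligence()
for i in range(3):
    broom_god.register(f"broom-{i}")

print(broom_god.command_all("sweep"))  # → 3 units commanded
```

Replacing an "old model" then really is just swapping a remote unit; the intelligence itself never goes anywhere.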
 

kris40k

New member
Feb 12, 2015
350
0
0
GoodOmens said:
Dirty Hipsters said:
What's the purpose of creating a machine that feels pain? What would be gained from giving pain receptors to machines that could not be achieved in another way?
We feel pain to identify damage to our bodies. A sentient machine would need a similar capability.
You missed the "in another way" part of Dirty Hipsters's question.

A machine could have a case system: if a body part is exposed to temperatures above a threshold, it knows it must respond within a given amount of time (adjusted for other safety conditions, of course; no accidentally slapping bystanders' heads off) or else the body part will be damaged. There is no need for it to feel what we would call "pain or discomfort" to know that it needs to take its hand off the stove. The ability to monitor its conditions, knowledge of its own physical limitations and thresholds, and an appropriate response to changes in those conditions is enough for it to avoid damage.

Making it feel "pain" is actually a bad idea, as pain can debilitate someone, preventing them from being able to properly avoid further harm.
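Something like this, say (a minimal sketch; the thresholds, names, and responses are all made up for illustration):

```python
# Hypothetical sketch of threshold-based damage avoidance: the
# controller acts on sensor data directly, with no "pain" signal.

SAFE_TEMP_C = 60.0  # assumed material limit for the hand actuator

def check_hand(sensor_temp_c: float) -> str:
    """Return the action the hand controller should take."""
    if sensor_temp_c <= SAFE_TEMP_C:
        return "continue"
    # Over threshold: retract, but at a bounded speed so the motion
    # stays safe for any bystanders nearby.
    return "retract_slowly"


print(check_hand(25.0))   # normal operation → "continue"
print(check_hand(180.0))  # hand on the stove → "retract_slowly"
```

No debilitation, no distraction: just a condition, a limit, and a measured response.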
 

SKBPinkie

New member
Oct 6, 2013
552
0
0
Well, that was astronomically fucked up.

I feel that if you give something the ability to feel emotions, pain, a will to live, etc. in the same way that a human can, then you should treat it like you'd treat any other person. Does it matter that it's not human? No; if something has the ability to suffer and you can't empathize with it, you're just a monster.

As to the people who're asking "why can it feel pain?": well, why do we feel pain? Because we need to know when something's wrong. If you fracture your toe by repeatedly hitting it on that one table leg, you should know about it.
 

DoPo

"You're not cleared for that."
Jan 30, 2012
8,665
0
0
GoodOmens said:
Dirty Hipsters said:
What's the purpose of creating a machine that feels pain? What would be gained from giving pain receptors to machines that could not be achieved in another way?
We feel pain to identify damage to our bodies. A sentient machine would need a similar capability.
So, let's see: you have a sophisticated machine which has enough built into it to be able to monitor itself for damage. Yet when damage is detected, a "pain" event is dispatched instead of "[WARN] Damage to [%s] detected". Why? Do you also suggest A/V software emit a "something may be wrong" sound instead of how it currently operates? How about processors just continually beeping when temperature thresholds get breached, instead of activating the countermeasures they have?

No, that's wasteful. You have the information already, use it.
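In other words, damage is just structured data routed to a handler. A toy sketch of the idea (function and part names invented for illustration):

```python
# Toy sketch: damage reported as a logged event plus a concrete
# response, instead of an undifferentiated "pain" sensation.
import logging

logging.basicConfig(level=logging.WARNING, format="%(levelname)s %(message)s")
log = logging.getLogger("android.body")

def on_damage(part: str, severity: float) -> str:
    """Handle a damage report: log it, then pick a response."""
    log.warning("Damage to [%s] detected (severity %.2f)", part, severity)
    # The controller already has the information; it acts on it directly.
    return "shutdown_part" if severity > 0.8 else "schedule_repair"


print(on_damage("left_hand", 0.3))   # → schedule_repair
print(on_damage("left_hand", 0.95))  # → shutdown_part
```

The machine gets exactly the information it needs, in a form it can act on, and nothing is wasted on making it suffer.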

bartholen said:
Or accidentally slammed the toilet seat down on their fingers while cleaning it?
I'm disturbed by you bringing this up. Does it happen...erm, ever? Let alone enough times to make it a real threat worthy of being mentioned.
 

Dirty Hipsters

This is how we praise the sun!
Legacy
Feb 7, 2011
8,802
3,383
118
Country
'Merica
Gender
3 children in a trench coat
kris40k said:
GoodOmens said:
Dirty Hipsters said:
What's the purpose of creating a machine that feels pain? What would be gained from giving pain receptors to machines that could not be achieved in another way?
We feel pain to identify damage to our bodies. A sentient machine would need a similar capability.
You missed the "in another way" part of Dirty Hipsters's question.

A machine could have a case system: if a body part is exposed to temperatures above a threshold, it knows it must respond within a given amount of time (adjusted for other safety conditions, of course; no accidentally slapping bystanders' heads off) or else the body part will be damaged. There is no need for it to feel what we would call "pain or discomfort" to know that it needs to take its hand off the stove. The ability to monitor its conditions, knowledge of its own physical limitations and thresholds, and an appropriate response to changes in those conditions is enough for it to avoid damage.

Making it feel "pain" is actually a bad idea, as pain can debilitate someone, preventing them from being able to properly avoid further harm.
Thank you, this was my point exactly (but one which I could not properly articulate at the time because I suck at typing on a phone).

Considering you've already covered why pain isn't a good indicator of danger for machines, I'll take things a step further and explain why pain is a poor indicator of damage and danger in humans in general.

In theory, when your body feels pain, it's your sensors telling your brain that your body is doing something wrong and that you need to stop whatever action you're doing. In reality, pain doesn't work quite like that. Sometimes we feel physical pain and discomfort when we're doing something that's good for our bodies, and sometimes we feel pain when there is nothing we can physically do to fix the source of the problem.

Take exercise, for example. Anyone who has ever worked out knows the feeling of pain and soreness that comes after. You get this feeling because when you work out you strain your muscles, causing tears, which are then rebuilt by your body and made stronger. This is something that is good for your body, but despite that fact you still feel pain. Basically, in this case your body tries to physically condition you to stop doing physical labor even though it's making you stronger and extending your life. Conversely, your body oftentimes rewards you for doing things that are bad, like overeating, which is the reason there are so many people who are overweight and unhealthy.

Then there's the problem that pain doesn't stop when you stop doing whatever was causing it. Say you stub your toe: even if you stop moving and putting weight on that foot, your toe is still going to keep hurting long after you've stopped injuring it. How is this pain helpful in stopping you from injuring yourself? It isn't.

Also, there's the problem of feeling pain in situations where the pain can't be resolved. You have a headache; what can you do about it? Pretty much nothing. You can mask the pain with drugs, but you can't actually stop its source; all you can do is wait for it to end. In this case the pain is completely unnecessary, because there is no way for you to stop whatever is causing the problem, so you feel (sometimes debilitating) pain completely pointlessly.

The combination of these things and many others makes physical pain kind of a lousy way to diagnose problems within the body. It's imprecise and can trigger completely unnecessarily. It's useful to humans because we have no other way to diagnose problems within ourselves, but considering the problems with pain, if we were to design a robot that was self aware we'd most likely find a much more practical solution to the problem of how to let it know that something is wrong with it.
 

Aerosteam

Get out while you still can
Sep 22, 2011
4,267
0
0

Seriously, programming something electronic to feel pain when damaged is one of the dumbest things you could do. Just... what is the point of it? So when it gets broken it would know about it? And what's the point of that? I'm the owner, and likely I would be the one to damage it anyway (not saying I'd use a sledgehammer, but you get the idea).
 

Zaeseled

New member
May 17, 2011
169
0
0
bartholen said:
...it'd have to include all the faults of the human hardware too: sweating and other excretions...but when you'd have it run 10 marathons in a row literally without breaking a sweat...
Maybe it's just me, but making an android that would disable sweating only while in a marathon seems really silly to me.
 

bartholen_v1legacy

A dyslexic man walks into a bra.
Jan 24, 2009
3,056
0
0
Zaeseled said:
bartholen said:
...it'd have to include all the faults of the human hardware too: sweating and other excretions...but when you'd have it run 10 marathons in a row literally without breaking a sweat...
Maybe it's just me, but making an android that would disable sweating only while in a marathon seems really silly to me.


That's not what I meant. The point was specifically that even if you had the perfect imitation of a human, if a robot lacked sweating and could quote an entire book straight from memory, it would not seem human to us. Some people can quote entire books aloud, but they're still people who eat, sleep, shit and sweat, and that's what makes the accomplishment so astonishing to us. But once you have a literal machine that can do anything and everything perfectly and without a single misstep, it ceases to be amazing and starts to seem frightening.
 

Zaeseled

New member
May 17, 2011
169
0
0
bartholen said:

Maybe it's my common sense powers activating, but I don't see any horses in 100m sprints. There are horseraces if you want to include horses, and dog races for dogs. There'd be a new competition exclusively for androids.
 

mad825

New member
Mar 28, 2010
3,379
0
0
tippy2k2 said:
It's scary to think that this is a very real possibility within my life time.
Like some other posters have said, it's unlikely. We still haven't found a replacement for transistor technology; there are a few candidates, but none of them are reliable enough. I know people like to blabber about Moore's law, but its breakdown is only a matter of time.

OT: Considering that we have to tell computers that 1+1=2, I doubt we'll have anything that truly thinks for itself, let alone thinks outside the box.
 

CrystalShadow

don't upset the insane catgirl
Apr 11, 2009
3,829
0
0
Yeah... I hadn't considered that specifically, but I have been thinking about another issue to do with the idea of robots and androids and the like...

Slavery.

I kept throwing around an idea for a thread, but couldn't quite bring myself to do it.

But in short, we seem to love having slaves.

We have a long history of keeping other humans as slaves (and enslaving animals for various reasons as well)
While in recent history the first thing that comes to mind is declaring people of a certain race to be slaves, slavery has a much longer and more diverse history than that...

Of course, we've gone beyond that, right?

... Or have we?
See...

This idea of destroying an intelligent android, (that can clearly feel pain, and think to some extent) is merely one facet of a wider concept.

The reason why this could come about is not merely a natural extension of what some of us currently do with our possessions and gadgets, but in fact an extension of the idea of ownership.

I own my computer. But... if my computer were an intelligent being in its own right, would I have any more right to claim ownership of it than I would to claim I owned another human being? (As in, a slave?)

For that matter, though, since an android is created and programmed with a specific purpose in mind (hypothetically), if you ask it, it may well say it enjoys its function and lives only to please...

But even so, does the fact that it is happy for me to 'own' it, and do whatever I ask it to, make it any less of a slave?
And does the fact that I own it, and it is a machine, technically actually give me the right to do whatever I please to it? (such as the example given in the OP)?

Dirty Hipsters said:
The combination of these things and many others makes physical pain kind of a lousy way to diagnose problems within the body. It's imprecise and can trigger completely unnecessarily. It's useful to humans because we have no other way to diagnose problems within ourselves, but considering the problems with pain, if we were to design a robot that was self aware we'd most likely find a much more practical solution to the problem of how to let it know that something is wrong with it.
I think your description of the situation kind of answers itself in some ways. It's not that pain is a useful mechanism compared to the possible alternatives; what all your examples actually demonstrate is that pain is not a sensation of "this is bad, stop doing this" so much as "something in this area is damaged! Watch out!"

Even the example you gave of tearing the muscles: that may be good for you long-term, but in the immediate here and now, you have in fact damaged your muscles.
Were it not for our innate regenerative abilities, that would definitely not be a good thing.
As it is, because our body does regenerate and regrow things in a way that improves them, this can be a good thing long-term, but it is still an example of you damaging yourself in the immediate sense.

Pain you can't do anything about is... Well, a consequence of a system that only cares about damage, and not specifically what to do about it.

I mean, in theory, you could implement a whole heap of other kinds of systems to take over this task, but it's very hard to say with a subject like this what that would look like in the end.
(or what it 'feels' like to the android itself. Which is an innately unknown factor)

Of course, when you consider this more closely, making an android feel pain has the hypothetical benefit of making a human being more sympathetic to what they are doing to the android.
In this case, the function of pain in an android has nothing to do with what the android actually feels (if it feels anything at all); rather, its behaviour would be tuned to attempt to elicit a certain kind of emotional response from the humans around it.

Which, admittedly may not work in the slightest, but...
It's something that's worth keeping in mind.
 

FalloutJack

Bah weep grah nah neep ninny bom
Nov 20, 2008
15,489
0
0
The only proper thing to do in this case is, if you're going to stress test a thinking machine - should we ever get there - make it an inactive test model built only to record results, not a sentient machine. Otherwise, you are indeed wasting the damn thing.