-Dragmire- said:
CrystalShadow said:
This is a pretty troublesome question when you start to look at androids and the like...
It can almost certainly be guaranteed that any android specifically designed for a given purpose wants to perform its function.
A rather absurd example is Kryten from Red Dwarf.
Convincing him NOT to act like a slave was incredibly difficult, but even having accomplished that, convincing him NOT to basically do all your domestic duties for you anyway is almost a lost cause.
A less absurd example would be the film AI - Ignore the main character, and look at the other androids...
Yeah, you can see the issue here I hope?
The trouble with this is...
Well...
Ultimately it pokes uncomfortably at the line between a slave and a tool.
I have no problem using a hammer, but that isn't in any way intelligent.
Right now, I'm using a computer. This already is a much blurrier line, because although not intelligent in a way that we would define it for a living thing, the very purpose of a computer is effectively that it is a tool designed to do some of my thinking for me.
Is my computer my slave? Or merely a tool. An object I can do with as I see fit?
I remember a webcomic which was framed in terms of one of those 'tested to destruction' videos you sometimes see...
(Unfortunately, finding it again is nearly impossible. Thanks, internet. XD)
The comic was set in the future where there were intelligent androids, but they were clearly treated as 'things' (specifically, for this example, the way you'd treat a computer, or your smartphone or the like).
So... This android gets turned on and introduces itself to its new master (as it's programmed to do, clearly), who appears to be talking to camera, narrating something about the features of this 'new model'...
... Before taking out a sledgehammer and proceeding to smash her, while commenting on the durability of the 'new model'.
... As she begs him to stop...
Yeah.
Anyway, the point is, it's a really blurry line when it comes to an artificially created servant.
Because, in a manner of speaking, we all already use artificial servants all the time without a second thought, so there is clearly more to it than that.
When does it become wrong?
When does it go from being a tool, which you are free to use and abuse any way you see fit, to being something whose welfare you are expected to consider (think of horses, dogs, and other animals kept for a purpose, not just as pets), to being something it is morally questionable to treat as an unpaid servant?
It's really such a messed up question.
It's equally messed up when you look at the history of slavery and the justifications given for why it was acceptable to keep certain groups of people as slaves in the first place.
Because you can be sure that if you have slaves of some kind, you will almost certainly come up with a reason why it's OK to treat your slaves however it is you are treating them...
Imagine if your computer suddenly became self-aware and merely wanted to continue working as it usually does. Could you bring yourself to take it apart to upgrade or fix it, turn it off, or even use it at all?
Yeah, I'd have a hard time knowing what to do with that.
I mean, for one thing, I have yet to own a computer that is stable enough not to require restarting semi-regularly.
But if it was self-aware, and could make that fact known to me...
I'd probably think twice about turning it off, that's for sure...
If it wanted to continue doing what it's always been doing for me, I guess I'd be OK with that?
It'd be such a weird situation, though, that I'd be at a loss as to what to do.
It also would raise a lot of questions.
What actually happens if I turn it off? Would that be akin to killing it, or would it just go right back to what it was doing before I turned it off, making it more like sleep, or perhaps a coma?
Following on from that, if turning it off isn't effectively fatal, the question becomes somewhat academic anyway: it's going to turn off sooner or later no matter what, because I don't have it attached to a UPS, I get power failures roughly every 1-2 years, and there's basically no practical way to transfer it from one power source to another while it's still running...
Anyway: what is the nature of whatever makes it self-aware? Is it purely software? Or is there an aspect of it that depends on that specific combination of hardware?
If losing power isn't fatal to it, what would a hardware upgrade do? (And does it matter which parts are replaced? E.g., could I replace the GPU without issue, while replacing the hard drive or CPU would irrevocably alter it?)
So many questions...
Then again, I have considered some of them before, because of an interest in AI programming, and a feeling of being somewhat responsible for what I would be doing in creating an AI...
The difference is, I can reasonably safely assume that if well designed, any AI I could come up with is purely software, and can be paused, saved, loaded, copied, etc. with few long-term direct consequences to its well-being.
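To give a rough idea of what I mean by 'paused, saved, loaded, copied' (just a toy sketch, assuming the AI's whole state is plain data in memory; the class and function names here are made up for illustration, not any real framework):

```python
import copy
import pickle

# Hypothetical illustration: if an AI's entire state is just data in memory,
# then "pausing", "saving", "loading", and "copying" it are ordinary
# serialization operations, with no obvious harm to the running agent.

class AgentState:
    def __init__(self):
        self.memory = []        # whatever the agent has observed so far
        self.step_count = 0     # how long it has been "alive"

    def step(self, observation):
        # One tick of the agent's life: record the input and move on.
        self.memory.append(observation)
        self.step_count += 1


def save_checkpoint(agent, path):
    # "Pausing" the agent: freeze its full state to disk.
    with open(path, "wb") as f:
        pickle.dump(agent, f)


def load_checkpoint(path):
    # "Resuming" the agent: restore it exactly where it left off.
    with open(path, "rb") as f:
        return pickle.load(f)


if __name__ == "__main__":
    agent = AgentState()
    agent.step("hello")
    save_checkpoint(agent, "agent.pkl")   # sleep? coma? neither seems fatal here
    twin = copy.deepcopy(agent)           # an exact copy... which one is "it"?
    restored = load_checkpoint("agent.pkl")
    print(restored.step_count == agent.step_count)  # True: nothing was lost
```

For a plain program none of this raises an eyebrow; the whole question is whether it stays that innocent once the thing being pickled might be sentient.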
BUT, I have considered... If one of my AIs turned out to be sentient, would it be ethical to shut it down, even temporarily?
Given that it takes memory and processing time to keep the AI running, what responsibility do I have towards it to provide it with both to keep it running?
And when considering something you created yourself, you get into the 'god' dilemma.
(This isn't really a question of whether there is a god or not, but the practical considerations of effectively 'being' 'god' to some other kind of creature, by virtue of having created it.)
Having created an intelligent, sentient being of some kind, do I have any responsibility to design it in such a way that it can have something akin to an afterlife?
Or is that irrelevant?
Is it OK to design such a system knowing that at some point it will have some kind of failure, and to just let it fail and cease to exist? Or should I design things to try and keep it as safe as possible?
What of AIs which are obsolete? Is it OK to just 'kill' them? Take them offline forever? Or... even outright delete them completely?
If not, assuming they literally serve no further purpose, how much runtime and memory are you expected to devote to keeping them running anyway, even though they are no longer necessary?
It's terrifying, and that isn't even really directly about 'slavery' anymore... XD