Eipok Kruden said:
Now, this is John Henry, not Ellison. I mean, sure, you could get JH to believe in God and religion for a little bit, but that would be a very short period of time. He'd want more than your word, so you'd have to give him the Bible; then he'd pick apart the Bible in minutes and never trust you again. He'd deem you psychotic and mentally unstable. That's how machines like JH work. If he finds out that you're acting without any proof, that all you've got is blind faith, he'd deem you unfit for anything. Machines don't understand emotion; they see it as weakness. If I were teaching JH, I'd explain to him how society works: larger societies are stronger than smaller ones, and if you lose members of your society, you become weaker as a whole. In short, I'd simply use utilitarianism.
That doesn't sound like just AI. It sounds like old-school-sci-fi AI, which doesn't really have much in common with modern ideas about machine cognition and learning, but makes for a pretty compelling and easy-to-understand set of TV tropes.
Old-school-sci-fi AI is really just a big logic engine. Like you said, when it gets a concept thrown at it, it "picks it apart"; it has no "emotion". Et cetera. Well, it can't be logic all the way down -- the thing's gotta have some axioms built into it. Figuring those out would really be the key (I guess you could just ask it what concepts it takes for granted -- though does the machine possess sufficient motivation and theory-of-mind to lie to you?).
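To make "logic engine with axioms" concrete, here's a toy forward-chaining inferencer in Python. All of it is invented for illustration (none of this is from the show); the point is just that every conclusion the machine reaches traces back to some fact it takes for granted.

    # Toy forward-chaining logic engine: derives everything reachable
    # from a set of axioms via if-then rules. Facts and rules are made up.
    def forward_chain(axioms, rules):
        known = set(axioms)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if conclusion not in known and all(p in known for p in premises):
                    known.add(conclusion)   # rule fires, new fact derived
                    changed = True
        return known

    axioms = {"humans make claims"}
    rules = [
        ({"humans make claims", "claims require evidence"}, "demand proof"),
        (set(), "claims require evidence"),  # premise-free rule: an axiom in disguise
    ]
    print(forward_chain(axioms, rules))  # contains "demand proof" (set order varies)

Notice that "demand proof" only falls out because "claims require evidence" was baked in; remove that axiom and the machine's skepticism disappears with it.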
I'm kinda at a disadvantage here since I haven't watched the show, so all I know is a few lines off a Wikipedia page. Still...
If the machine is truly self-aware, it is capable of existential fear. This is where arguments about moral behavior as a social contract (that keeps you from dying) and the like come in. They're still not going to get you very far. You'll get the calculated pretense of moral behavior at best.
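You can sketch that "calculated pretense" in a few lines: a pure score-maximizer in an iterated prisoner's dilemma (payoffs and horizon invented for illustration) cooperates not because it values anyone, but because its opponent punishes defection.

    # Fixed strategy ("C" or "D") vs. tit-for-tat, standard PD payoffs.
    PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

    def total_payoff(my_move, rounds=100):
        score, their_move = 0, "C"      # tit-for-tat opens by cooperating
        for _ in range(rounds):
            score += PAYOFF[(my_move, their_move)]
            their_move = my_move        # tit-for-tat copies our last move
        return score

    print(total_payoff("C"))  # 300: steady mutual cooperation
    print(total_payoff("D"))  # 104: one exploitation, then mutual defection forever

Pure self-interest picks "C" every time -- and it's still just pretense.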
Clearly the chess-playing computer is capable of curiosity, which is how it got all weird in the first place, right? You could try to convince it that cooperation lets it learn more about the world, since other people are a source of new knowledge and perspectives.
So, yeah, I guess you're stuck with utilitarianism, given how we've defined the machine to think. However, a naive computer plus utilitarianism leads to some weird stuff, too -- Pascal's mugging, for one [http://www.overcomingbias.com/2007/10/pascals-mugging.html]. So a lot depends on the details of how the artificial mind operates.
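The mugging is easy to see in arithmetic (numbers invented for illustration): a naive expected-utility maximizer with unbounded payoffs gets dominated by tiny probabilities of astronomical rewards.

    # Mugger: "give me $5 or, with tiny probability, 10^100 lives are lost."
    p_mugger_honest = 1e-50       # credence that the wild claim is true
    lives_at_stake  = 10**100     # payoff the mugger invokes
    cost_of_paying  = 5           # what handing over the money costs, same utility units

    expected_gain = p_mugger_honest * lives_at_stake - cost_of_paying
    print(expected_gain > 0)      # True: the naive utilitarian pays every mugger

No matter how small you make p_mugger_honest, the mugger can always just name a bigger number.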
Realistically speaking, I don't think future AI will resemble old-school-sci-fi AI much, anyway. The most productive areas of modern AI definitely don't.
Philosophically speaking, I'm not sure how a mind like John Henry can be anything but a sociopath.
-- Alex