What kind of "previous examples" are we talking about? If you use the Supreme Court as an example, not only do you get an AI bound entirely by laws (which are really nothing more than majority prejudices), but you also risk it deciding that things like Dred Scott should count for more than things like Brown v. Board. Religion: same basic problems, regardless of which one you pick. No moral system is perfect, and very few are internally consistent.wahi said:well i guess there should be something like asimov's rules in place.
And then there should be some sort of probabilistic model, preferably one that includes machine learning, so that the AI can learn from previous examples. And somehow we teach the AI all the judgements passed down by the Supreme Court, or something like that.
Of course, this only goes so far. How we convert a morally ambiguous situation into machine code that the AI can understand is, in my opinion, a much bigger problem. So IMO only an AI that can pass the Turing test can be taught morality. Not exactly QED, but I am quite sure of this!
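For what it's worth, here is a minimal sketch of what wahi's "probabilistic model that learns from previous examples" might look like, assuming scikit-learn and a toy, invented handful of case summaries (none of this is real training data, and real rulings would need far richer features than bag-of-words text):

```python
# Toy sketch: learn a probabilistic "verdict" model from prior rulings.
# The case summaries and labels below are invented placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical precedent data: short case summaries and their outcomes.
cases = [
    "state law mandates racial segregation in public schools",
    "plaintiff denied citizenship on the basis of race",
    "statute restricts speech criticizing the government",
    "statute guarantees equal access to public facilities",
]
labels = ["struck down", "struck down", "struck down", "upheld"]

# TF-IDF features plus logistic regression give a probability for each
# outcome, not just a hard verdict (the "probabilistic model" part).
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(cases)
clf = LogisticRegression()
clf.fit(X, labels)

# Ask the model about a new, unseen situation.
new_case = "ordinance bars a minority group from owning property"
probs = clf.predict_proba(vectorizer.transform([new_case]))[0]
for outcome, p in zip(clf.classes_, probs):
    print(f"{outcome}: {p:.2f}")
```

Note that this is exactly where the objection above bites: the model runs Dred Scott-era precedents through the same machinery as Brown-era ones, so it learns whatever prejudices sit in the training set.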
I say that our hypothetical AI might not need to be taught morals at all, depending on how we go about creating it. Most of us are assuming that the AI would be created by developing some deep understanding of human psychology and writing a program to copy it. But I say it is more likely that the AI would be created by taking a pile of code, placing it in a virtual environment, and allowing it to self-replicate with modification. Something would eventually arise that is sentient, and it would probably have the same (or at least similar) pack instincts as human beings; the pack instinct is the foundation of virtually all morality and ethics.
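The "pile of code that self-replicates with modification" is essentially a genetic-algorithm or artificial-life setup. A minimal sketch, with an invented bit-string genome and a hand-coded "pack instinct" fitness rule (in a real open-ended environment that pressure would have to emerge from agent interactions rather than being written in):

```python
# Toy artificial-life loop: genomes replicate with random mutation, and
# genomes carrying more "cooperative traits" (1-bits) replicate more often.
# The genome encoding and the fitness rule are invented for illustration.
import random

GENOME_LEN = 16      # each bit stands in for one behavioral trait
POP_SIZE = 50
GENERATIONS = 200
MUTATION_RATE = 0.02

def fitness(genome):
    # Hand-coded "pack instinct" pressure: count cooperative traits.
    return sum(genome)

def mutate(genome):
    # Self-replication with modification: each bit may flip on copying.
    return [b ^ 1 if random.random() < MUTATION_RATE else b for b in genome]

# Start from a random "pile of code".
population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]

for _ in range(GENERATIONS):
    # Selection: fitter genomes are proportionally more likely to replicate.
    weights = [fitness(g) + 1 for g in population]  # +1 avoids zero weights
    parents = random.choices(population, weights=weights, k=POP_SIZE)
    population = [mutate(p) for p in parents]

best = max(population, key=fitness)
print(f"After {GENERATIONS} generations, best cooperation score: "
      f"{fitness(best)}/{GENOME_LEN}")
```

The catch, of course, is that the selection pressure here is hand-written by us; whether genuine pack instincts would arise on their own in an open-ended environment is precisely the open question.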