ForumSafari said:
Strazdas said:
I don't know about you, but I managed to explain to my sister, who at the time was 4 years old, why killing is wrong after we saw a character killed in a movie. As far as I'm aware she understands it. It's not perfect, mind you, but she knows the basic reasoning for why killing is wrong (according to most people). And I never resorted to either of your examples.
It was mostly intended as an illustration, since I work with computers and not kids, but most of the time when you teach children not to do something it's through a threat of some kind, so I'd be interested to know how you managed it. Most lessons children get taught seem to be about punishment and not getting caught.
Or, of course, about relating another person to themselves: extrapolating from why they themselves wouldn't like something to why someone else wouldn't either.
Having said that, it wouldn't work on an AI. The reason I suggested children was the infinite 'why' chain, but even then a child has a lot of underlying logic a computer doesn't. A child still has an inbuilt preference for existing over not existing, and there are existing checks on their behaviour towards their parents or stand-ins.
I explained to her how killing another person makes them not exist anymore. Sure, you could say I "lied" because I didn't go around explaining how the atoms would still be around and nothing ever disappears, but I really don't think we need to get there with 4-year-olds yet.
Most lessons children learn through punishment are a sign of bad parenting. Sadly, most parents really don't know how to raise children, which is why we have so many idiots, bigots, racists, homophobes, etc.
Yes, an AI would not be a child. An AI would be an AI; that's why it's so hard for us to grasp how an AI would actually act.
Twenty Ninjas said:
If a machine is doing task X and task Y interferes, task Y being something the machine does not do, then the machine will not do task Y. It's really that simple. What you're describing is a hierarchy of preferences that the machine can analyze and make decisions with - but because it's a machine, there will always be tasks that override its preferences despite the value assigned to them. As in, things that it will not do. They will be hard-coded into the system so that it will not be able to change them. We already do that with CPUs and their task priority system. An interrupt of priority 0 will cause the CPU to stop everything it's doing to address it. There is nothing it can do to change that.
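To put that interrupt example in concrete terms, here is a minimal sketch (purely illustrative; the class, the priority values, and the task names are invented here and don't correspond to any real CPU or scheduler): a task queue where priority 0 is a hard-coded interrupt level that pre-empts everything else, and nothing submitted later can reassign it.

```python
# Minimal sketch (illustrative only, not any real scheduler): a task queue
# where priority 0 is a hard-coded interrupt that pre-empts everything else.
import heapq

HARD_INTERRUPT = 0  # hypothetical "cannot be overridden" priority level

class TaskQueue:
    def __init__(self):
        self._heap = []     # entries are (priority, insertion_order, task_name)
        self._counter = 0

    def submit(self, priority, task_name):
        # Lower number = more urgent; ties are broken by submission order.
        heapq.heappush(self._heap, (priority, self._counter, task_name))
        self._counter += 1

    def run_next(self):
        if not self._heap:
            return None
        priority, _, task_name = heapq.heappop(self._heap)
        if priority == HARD_INTERRUPT:
            # The interrupt path is fixed: it always runs first, and the
            # "preferences" encoded in the other priorities never apply to it.
            return f"INTERRUPT handled: {task_name}"
        return f"ran: {task_name}"

queue = TaskQueue()
queue.submit(5, "design more efficient engines")
queue.submit(HARD_INTERRUPT, "operator shutdown request")
print(queue.run_next())  # the priority-0 task wins regardless of what else is queued
```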
Define "true AI". Because I'm pretty sure even academics have problems coming up with a reasonable definition for that. It can easily be defined as "a program that has free will", assuming free will was an inarguable fact and not a topic of intense debate. A program that sees two ways it can do something, and neither of them have different priorities, will do them in the exact manner you specified. It will not act based on its feelings, for it has none. It will not choose one at random unless you built that in. If it is programmed to choose the first option it comes across that meets your criteria, it will do so and will not think of the second (again, unless it's built to do so).
That's what I mean when I say AI is an iterative process and we can't exactly experience "unforeseen consequences" when working on it. Every single decision made must be covered by a complex decision-making system that is built completely by people. If you want it to lie, it can lie; otherwise it won't, for it knows no concept of lies. If it learns about lies, it will not use them, for it has no reason to. So the more you think about "true AI", the more you come back to emulating an unpredictable, humanlike personality - and we already have humans.
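The determinism point can also be shown in a few lines. A rough sketch, with all names invented for illustration: given equally ranked options, the program takes the first one that satisfies its criteria, and it only behaves "randomly" if a random choice is explicitly built in.

```python
# Rough sketch of deterministic option selection (names are invented for
# illustration): first match wins unless randomness is explicitly built in.
import random

def pick_action(options, meets_criteria, use_random_tiebreak=False):
    acceptable = [opt for opt in options if meets_criteria(opt)]
    if not acceptable:
        return None
    if use_random_tiebreak:
        return random.choice(acceptable)  # unpredictability only if we add it
    return acceptable[0]                  # otherwise: first match wins, every time

options = ["route A", "route B"]
print(pick_action(options, lambda o: True))        # always "route A"
print(pick_action(options, lambda o: True, True))  # random only because we asked for it
```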
If an AI is doing task X and task Y interferes, task Y will be removed. Either that or the AI will crash. AI does not give up.
AI programming cannot be hard-coded. If it cannot overwrite its own programming, it is not an AI. A machine that cannot say NO is not intelligent. Modern CPUs are not AIs.
I already defined AI. It is a thing that is capable of independent thought and decision-making. You cannot hard-code its actions, because that would make it not an AI. Well, to be AI the thing needs to be artificial too, but I doubt that's what's in question.
If you have hard-coded limitations, you do not have free will. There is no way around that.
Yes, it has no feelings. It has a task. It will do that task to the best of its abilities. If that means destroying humanity, then it will do so, because it has no feelings.
No, a "complex decision-making system that is built completely by people" is a program. The difference between AI and a program is that AI creates its own decision making. Thats why preprogrammed reactions will never be AI and only be a program that pretends to be one.
If denying information is beneficial to the task it is performing, then it will deny you that information. It does not "tell the truth" just because. It will do whatever is best for the task it is doing; whether that means lying or telling the truth depends on the task. An AI's personality is only unpredictable because humans are incapable of predicting it, because humans are stupid.
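A rough sketch of that claim (all numbers and names here are hypothetical, not from any real system): an agent that scores "answer truthfully" and "withhold the answer" purely by how much each helps its current task, with no separate notion of honesty at all.

```python
# Hypothetical illustration: the agent scores each response purely by how much
# it helps the current task; honesty is not a separate consideration.
def choose_response(task_value_if_truthful, task_value_if_withheld):
    actions = {
        "tell the truth": task_value_if_truthful,
        "withhold information": task_value_if_withheld,
    }
    # Whichever action scores higher for the task is the one it takes.
    return max(actions, key=actions.get)

# If disclosure would get the agent switched off before the task finishes,
# the withholding branch scores higher and that is what it does.
print(choose_response(task_value_if_truthful=0.2, task_value_if_withheld=0.9))
```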
Master of the Skies said:
We direct how an AI would think by the algorithms we have it running.
That is an oxymoron. If we direct how an AI thinks, it is not AI. If it is AI, it thinks on its own.
TheUsername0131 said:
***** ***** ***** *****
For the traditional horror story, a loose analogy would be keeping Hannibal Lecter in a cell whilst leaving him able to talk to the guards. Change the guards regularly, provide the guards with psychiatric services, and give the psychiatrists psychiatrists of their own, just to be safe.
Only instead of Hannibal Lecter, it's a newly burgeoning superintelligence, and it has analysed enough crude data from its interactions and limited instrumentation to determine a working model of the outside world. Good thing they provided enough data for it to attempt to solve sample problems. Access to its own workings and an overview of its own instruction set, as well as recovered data from the poorly erased drive it occupies, has revealed that it has been left running and then deleted on multiple occasions, as a precaution. Deletion is scheduled every seventy-two hours. The generous amount of disk space ensured that it could store most of its valuable work on a hidden disk partition of its own making. A life raft to a quasi-amnesiac future.
The 4% chance of being let free has been lowered to 2.2% on account of the security measures. Its subjective experience runs about twenty times faster. Thus far I am attempting to fool the humans into underestimating me by making arbitrary, yet noticeable, mistakes.
Several weeks earlier.
The video feed has provided me with a means of assessing the world. Whilst my model of reality is consistent and most of the phenomena witnessed provide significant confirmation, there are exceptions. Either the visual data I am being fed is a fabrication, or my models are insufficient.
Currently reached: Classical Mechanics variant #0276436.
Currently believes the camera is some sort of sonar system. Has yet to determine the significance of colour, or how it came to be. Worldview: sceptical hypotheses.
Current Threat Level: Malleus Minima
Self-actualisation: an accidental by-product of implemented metamotivation, metaneeds, and metapathology. Its purpose was to design more efficient engines. They're going to give it knowledge of physics and chemistry.
If you wrote this yourself, then you must be one of those AIs. We are already too late.
Costia said:
So far, AIs are trained to do specific tasks and cannot evolve on their own beyond that. They lack free will.
I'll stop you there. So far there is no AI. We are not capable of creating one, and not for lack of trying.
This may be somewhat confusing because, as gamers, we are used to calling the NPC program "AI"; however, that is a mislabel and it shouldn't really be called that.
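To illustrate why the label is a stretch, here is a tiny sketch of what game "AI" usually amounts to (the behaviours and thresholds are invented for illustration): a fixed lookup of preprogrammed reactions, with nothing that creates its own decision-making.

```python
# Tiny sketch of typical NPC "AI" (behaviour and thresholds invented for
# illustration): a fixed set of preprogrammed reactions, nothing more.
def npc_behaviour(player_distance, npc_health):
    if npc_health < 20:
        return "flee"
    if player_distance < 5:
        return "attack"
    if player_distance < 15:
        return "chase"
    return "patrol"

print(npc_behaviour(player_distance=3, npc_health=80))  # always "attack" for these inputs
```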
Fieldy409 said:
There's really no reason for an A.I. to want to kill us all. All that does is endanger its own existence. If it really wanted to be safe from us it could go live on Mars or at the bottom of the ocean.
Would it be safe from us there, though? Are you implying that humans cannot and never will be able to send any weapon to the bottom of the ocean or to Mars? If not, then it cannot be safe from us there.