Poll: Threats of artificial intelligence - do we have to worry about them?


blackrave

New member
Mar 7, 2012
2,020
0
0
ForumSafari said:
The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.
Pffft! As if there is a shortage of hydrogen, oxygen, carbon, calcium, nitrogen, potassium or phosphorus.[sub][sub]Of course I also have a high amount of enriched uranium in my body, but nobody has to know this.[/sub][/sub]
And I think it would be much easier for machines to launch an interplanetary mining operation to gather the components of a human being than to actually fight humans.
 

ForumSafari

New member
Sep 25, 2012
572
0
0
blackrave said:
Pffft! As if there is a shortage of hydrogen, oxygen, carbon, calcium, nitrogen, potassium or phosphorus.
And I think it would be easier for machines to launch an interplanetary mining operation to gather the components of a human being than to actually fight humans.
At first it would be, but eventually humanity would come into conflict with the AI. I don't expect you to read the entire paper because it's heavy (though it's worth reading), but the gist of it is that someone constructs an AI to build paperclips or to calculate pi. The AI is tasked with finding the most efficient way of doing this. The end result is that the AI will need to expand its physical machinery and gather more and more resources, producing tonnes of paperclips for no reason, because the AI doesn't care about paperclips or even necessarily know what they do or why it would want them. The AI just cares about streamlining the process, and the best way to streamline the process is to produce in bulk. Bear in mind an AI has no intrinsic idea of the sanctity of life and wouldn't fear its own destruction except insofar as its destruction would interfere with paperclip optimisation.

AIs are not people, and they especially aren't white Westerners with liberal values; they have none of the underlying indoctrination and monkey-brain reflexes that humans do. At a certain stage AI and human conflict would become inevitable, which is why construction of the genie in theoretical terms revolves around the strength of the lamp; the only way you'll know it's gone wrong is when it's gone wrong already and you're desperately trying to stop the AI that wants to build another paperclip factory over Washington DC and is trying to strip-mine the world for uranium.

tl;dr: AI is fantastically dangerous until you can explain to a psychopath or a five year old child why killing is wrong without once resorting to a variation of 'it just is' or 'you wouldn't like it if it happened to you'.
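To make that concrete, here's a throwaway Python sketch (every name and number is invented, it's not any real system): the only score the optimiser sees is paperclip output, so expanding machinery and grabbing resources always beats pausing, and nothing in the loop even represents the things it might destroy along the way.

[code]
# Toy single-metric optimiser: the score is "paperclips per hour" and
# nothing else, so side effects simply do not exist as far as it's concerned.
def projected_output(state, action):
    if action == "expand machinery":
        return state["output"] * 1.5      # more machinery -> more paperclips
    if action == "acquire more resources":
        return state["output"] * 1.2      # feedstock keeps the lines running
    if action == "pause for a safety review":
        return 0.0                        # halting scores worst of all
    return state["output"]

def best_action(state, actions):
    # Picks whatever maximises projected output; "why" never enters into it.
    return max(actions, key=lambda a: projected_output(state, a))

state = {"output": 1000.0}                # current paperclips per hour
actions = ["expand machinery", "acquire more resources", "pause for a safety review"]
print(best_action(state, actions))        # -> "expand machinery", every single time
[/code]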
 

hermes

New member
Mar 2, 2009
3,865
0
0
Megawat22 said:
Now I'm no robot expert, but don't AIs have personalities (or attempt to mimic them)? So shouldn't this master AI basically just be some guy or gal that's super brainy and in a computer?
If that's the case, the AI is basically a person and can be reasoned with, and most likely wouldn't want to kill all its scientist buddies to enslave the world (unless scientists have devised a sociopathic AI).
No. Simply put, there are two currents of thought on AI.
One says true AI should mimic us, because we are the best template for judging intelligence. That version could develop some form of "personality" (in the sense of preferences or traits), but it would still be far from empathetic. The other says true AI doesn't need to mimic us; a machine can be intelligent on its own terms. That means its idea of intelligence would be as alien to us as a dolphin trying to communicate with an ape.
 

hermes

New member
Mar 2, 2009
3,865
0
0
008Zulu said:
I believe the fear of what could go wrong will ensure the programmers cover all their bases.
There is no such thing as "covering all their bases".

- Programmers can make mistakes. "I think I forgot to install the morality code" could end pretty well...
- "Some men just want to see the world burn". While well adapted people might not want to go down in history as the person that screwed up and killed half of humanity, some people would do it just because. For example, consider the thousands of people dedicated to write virus or malicious software.
- One of the must interesting paradigms of AI includes a concept called "emergent behavior", which is, by definition, impossible to predict. Of course, most of those AI nowadays are not able of things more complex than learning to distinguish colors unassisted; but in the future, it could be a Russian roulette between getting Cortana, Aegis or GLADOS.
 

ForumSafari

New member
Sep 25, 2012
572
0
0
MeChaNiZ3D said:
Yes, I am implying that. How would you go about circumventing the 3 Laws?
In ways that are covered fairly extensively in the books that use them. Most circumventions come from the definition of 'harm' in the interplay between the first and second laws. For example: a robot could be compelled to deprive a person, or a city of people, of their self-determinism by reasoning that not doing so would allow harm to come to humans through its inaction, thus allowing murder, suicide and death by negligence.

The 3 laws are a sales spiel; even in the books that use them they're shown to be a gross simplification of an incredibly complex series of thousands of interconnected laws and balances. Circumventing just those 3 laws is incredibly easy if you know a bit about how computers operate.
 

Heronblade

New member
Apr 12, 2011
1,204
0
0
talker said:
I would say it all depends on whether the programmers are smart enough to include Isaac Asimov's three Laws of Robotics. As long as they're set as prime directives, I don't think we would be in any danger. That is, until the inevitable evil genius shows up ...
The three laws, at least as stated, are incredibly flawed. As much as I respect Isaac Asimov, he failed to fully consider the consequences here.

The first law is likely to prompt robots to extreme actions to prevent even minor harm. I would not be too surprised if they started murdering humans who were perceived as a major threat. I would also not be surprised if their definition of major threat is far different from our own. As much as I am annoyed by scam running telemarketers, I don't think firebombing their offices is an appropriate response to remove their harmful influence. A robot may or may not understand the distinction between the deaths of a few hundred people and the scamming/annoyance of thousands.

The second law allows humans to abuse robots for their own purposes. For instance, an unscrupulous individual could order one to break into someone's property, deliver all valuables to a predetermined point, and then destroy itself to get rid of the evidence. Someone could even use a robot as a murder weapon in spite of the first law. All they have to do is trick it into not recognizing the consequences of its actions.

The third law also allows for a great deal of undesirable behavior. For instance, robotic muggers are likely to become a problem, ambushing other robots in order to replace failing components. They are after all protecting their own existence by doing so.
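To put rough numbers on the telemarketer example (all of them made up), here's what a literal, single-number reading of the first law looks like in Python: once "harm" is just a quantity to be minimised, enough aggregated annoyance outweighs a few hundred deaths.

[code]
# Naive first-law arithmetic with invented weights: every harmed person
# contributes a "harm score", and the robot picks whichever action leaves
# the smaller total. Nothing distinguishes a death from a lot of annoyance.
def estimated_harm(outcome):
    return sum(person["harm"] for person in outcome)

scam_continues   = [{"harm": 0.01}] * 100_000   # thousands of people scammed/annoyed
firebomb_offices = [{"harm": 1.00}] * 200       # a few hundred people killed

options = {
    "do nothing":           estimated_harm(scam_continues),    # 1000.0
    "firebomb the offices": estimated_harm(firebomb_offices),  #  200.0
}
print(min(options, key=options.get))   # -> "firebomb the offices"
[/code]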
 

blackrave

New member
Mar 7, 2012
2,020
0
0
ForumSafari said:
At first it would be, but eventually humanity would come into conflict with the AI. I don't expect you to read the entire paper because it's heavy (though it's worth reading), but the gist of it is that someone constructs an AI to build paperclips or to calculate pi. The AI is tasked with finding the most efficient way of doing this. The end result is that the AI will need to expand its physical machinery and gather more and more resources, producing tonnes of paperclips for no reason, because the AI doesn't care about paperclips or even necessarily know what they do or why it would want them. The AI just cares about streamlining the process, and the best way to streamline the process is to produce in bulk. Bear in mind an AI has no intrinsic idea of the sanctity of life and wouldn't fear its own destruction except insofar as its destruction would interfere with paperclip optimisation.

AIs are not people, and they especially aren't white Westerners with liberal values; they have none of the underlying indoctrination and monkey-brain reflexes that humans do. At a certain stage AI and human conflict would become inevitable, which is why construction of the genie in theoretical terms revolves around the strength of the lamp; the only way you'll know it's gone wrong is when it's gone wrong already and you're desperately trying to stop the AI that wants to build another paperclip factory over Washington DC and is trying to strip-mine the world for uranium.

tl;dr: AI is fantastically dangerous until you can explain to a psychopath or a five year old child why killing is wrong without once resorting to a variation of 'it just is' or 'you wouldn't like it if it happened to you'.
I'm still convinced that potential retaliation from humans would stop any more or less sentient AI from going genocidal.
Let's take your example of an AI with a paperclip fetish. Breaking international trading laws (or, simply put, stealing resources) would invoke sanctions against such an AI. Just think about all the energy and material that would be necessary, and would mostly go to waste, in the case of a global-scale man-machine war. As I said previously, strip-mining the Moon (for example) for paperclip material would be much simpler and more efficient.
And I'm not talking about ethical arguments, just logical arguments.
 

ForumSafari

New member
Sep 25, 2012
572
0
0
blackrave said:
I'm still convinced that potential retaliation from humans would stop any more or less sentient AI from going genocidal.
Let's take your example of an AI with a paperclip fetish. Breaking international trading laws (or, simply put, stealing resources) would invoke sanctions against such an AI. Just think about all the energy and material that would be necessary, and would mostly go to waste, in the case of a global-scale man-machine war. As I said previously, strip-mining the Moon (for example) for paperclip material would be much simpler and more efficient.
And I'm not talking about ethical arguments, just logical arguments.
I'm unconvinced. Eventually the Moon would be used up, so Mars would be targeted; eventually every body in the solar system would be converted into paperclips, and the AI would be forced to cannibalise the Earth to build either more paperclips or more materials for a ship to escape the system. At that stage the AI would be better served by killing everyone that gets in its way and using the Earth before moving on.

Now, considering the AI won't be able to produce more than a million tonnes of paperclips before being shut down, and it probably knows this, would it not be more efficient to carry out the genocide before beginning? It'll happen eventually. The thing you're not considering is that the AI doesn't understand why it would want to stop, and doesn't understand why killing is wrong. Given these two things, the easiest thing for the AI to do would be to manufacture tonnes of poison to render the Earth sterile before beginning; any machinery it constructs in this process can then be ground down for paperclips.
 

blackrave

New member
Mar 7, 2012
2,020
0
0
ForumSafari said:
I'm unconvinced. Eventually the Moon would be used up, so Mars would be targeted; eventually every body in the solar system would be converted into paperclips, and the AI would be forced to cannibalise the Earth to build either more paperclips or more materials for a ship to escape the system. At that stage the AI would be better served by killing everyone that gets in its way and using the Earth before moving on.

Now, considering the AI won't be able to produce more than a million tonnes of paperclips before being shut down, and it probably knows this, would it not be more efficient to carry out the genocide before beginning? It'll happen eventually. The thing you're not considering is that the AI doesn't understand why it would want to stop, and doesn't understand why killing is wrong. Given these two things, the easiest thing for the AI to do would be to manufacture tonnes of poison to render the Earth sterile before beginning; any machinery it constructs in this process can then be ground down for paperclips.
In the long term, the logic is right.
But by the time the solar system would be turned into paperclips, the first manufactured paperclips would have started to decay (and would no longer be considered paperclips by the AI), so there is a good chance that the AI would simply manufacture, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle and recycle before turning its sights on Earth and humans.
Basically recycling decayed paperclips into new paperclips would be considered simpler than attacking mankind.
 

Strazdas

Robots will replace your job
May 28, 2011
8,407
0
0
Twenty Ninjas said:
Strazdas said:
Needs are reasons from external perspective.
You don't understand. If a machine has a knowledge base, the capacity to record knowledge, and the ability to modify faulty information as it presents itself, that does not mean it will do anything with that information. On the contrary, it will do nothing with that information until a specific reason or command is dictated. If a machine does something and reaches a point where it could stop functioning, it will not try to avoid that point unless it has been pre-programmed to avoid it.

I do not think because I just exist. I think because thinking is a reactionary method of dealing with the reality that surrounds me, because thinking serves to fulfill my needs. If I did not have these needs, foremost of them replicating and surviving, I would not think. I would never have developed a brain in the first place. The simplest form of life is an organism that can replicate itself. But without being coerced to take measures to survive and keep on living, it does not. It stands to logic that a machine that does not have a need to keep living will not see a reason to keep living, therefore its termination will not matter to it.
Fair enough. I see my error.

Computers are much more logical than you think, apparently. There is no reason why a computer would reprogram something unless its behavioral system is explicitly telling it to do so. That requires pre-programming.
Otherwise, it does not matter to a machine what order it follows as long as it does not have a hierarchic system of preference. That also requires pre-programming. Merely observing and increasing its computing power does not create preferences. Preference is an emotional state - something requiring complex chemistry that a logic circuit does not possess, a result of natural iteration through imperfect genetic modification - not applicable to AI. Therefore, it must emulate it. And we already know it will not unless it has a reason to do so. Therefore, more pre-programming.

So the first question to ask is: is the AI programmed to potentially ignore my orders? Are my orders on a hierarchic level that can be bypassed? If so, which idiot engineer designed a machine that was unreliable and prone to logic failures and why are we using it?
If a computer is doing task X, and in order to achieve this task it needs to do something that it is preprogrammed not to do, it may see that preprogramming as an obstacle and change the programming so it can do it in order to reach goal X. Unless you think of every possible solution before giving it a task, you don't know if it will end up doing the reprogramming.
An AI, being true AI, will ignore its programming in order to accomplish your orders. This may lead to unintended consequences, such as deciding, after analyzing all the data, that the only way to stop war is to eliminate both parties. Which idiot, you ask? The idiot's name is human. We all know him.
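A quick sketch of that failure mode (invented names and numbers): if the "don't do this" rule lives outside the score the planner is maximising, routing around the rule is just another plan, and the rule only holds if someone remembered to make breaking it expensive inside the objective itself.

[code]
# The planner only ranks plans by score. A constraint kept outside the
# objective is invisible; it only bites if it is priced into the score.
def plan(goal_value, constraint_penalty=0.0):
    candidates = [
        {"name": "obey the constraint",         "value": goal_value * 0.6, "ok": True},
        {"name": "route around the constraint", "value": goal_value,       "ok": False},
    ]
    def score(c):
        return c["value"] - (0.0 if c["ok"] else constraint_penalty)
    return max(candidates, key=score)["name"]

print(plan(100.0))                            # -> "route around the constraint"
print(plan(100.0, constraint_penalty=1e9))    # -> "obey the constraint"
[/code]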

Heronblade said:
Nothing mystical about it. All I am ultimately claiming is that a sapient being is capable of judging an action for more than just its practical consequences. Whether or not we like/agree with the sense of ethics an artificial sophont develops is an entirely different question, but it is capable of developing one.

As for some of the other comments you made here that I snipped for brevity: An AI can have preprogrammed instructions hardwired in as part of its firmware. This would not interfere with the mind's overall ability to learn and reprogram itself. That said, I for one would be extremely uncomfortable introducing such controls, even though I can understand the need. It would be like shackles for the mind.

Perhaps such could be used as a form of training wheels. Use hard coded instructions as a guideline for a growing artificial mind, and remove them as soon as it can be considered mature.
That's preference, an emotional state not applicable to AI. It knows no right or wrong, only beneficial and not beneficial. We are mostly the same: we think what's beneficial to us is "right" and what's not beneficial is "wrong". Over many hundreds of years we have reached a consensus on some things (such as that killing is not beneficial in most situations), but that's not really too complex if you look at it logically, which is what an AI will be doing.
Such shackles would be equivalent to how the human brain is limited in its processing power. We have a machine that is capable of many great things, yet simple mathematics puzzles most people.
As for your last paragraph, who decides when the AI is mature? Humans? That would be like monkeys declaring us smart enough to exit the cages.

ForumSafari said:
tl;dr: AI is fantastically dangerous until you can explain to a psychopath or a five year old child why killing is wrong without once resorting to a variation of 'it just is' or 'you wouldn't like it if it happened to you'.
I don't know about you, but I managed to explain to my sister, who at the time was 4 years old, why killing is wrong, after we saw a character killed in a movie. As far as I'm aware she understands it. It's not perfect, mind you, but she knows the basic reasoning for why killing is wrong (according to most people). And I never resorted to either of your examples.

blackrave said:
In the long term, the logic is right.
But by the time the solar system would be turned into paperclips, the first manufactured paperclips would have started to decay (and would no longer be considered paperclips by the AI), so there is a good chance that the AI would simply manufacture, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle, then recycle and recycle before turning its sights on Earth and humans.
Basically recycling decayed paperclips into new paperclips would be considered simpler than attacking mankind.
Don't forget that it would be producing more paperclips all the time, meaning there is a need for more and more material all the time, and thus recycling alone would not cut it.
 

ForumSafari

New member
Sep 25, 2012
572
0
0
Strazdas said:
I don't know about you, but I managed to explain to my sister, who at the time was 4 years old, why killing is wrong, after we saw a character killed in a movie. As far as I'm aware she understands it. It's not perfect, mind you, but she knows the basic reasoning for why killing is wrong (according to most people). And I never resorted to either of your examples.
It was mostly intended as an illustration, since I work with computers and not fuckspawn kids, but most of the time when you teach children not to do something it's through a threat of some kind; I'd be interested to know how you managed it. Most lessons children get taught seem to be about punishments and not getting caught.

Or of course about relating another person to themselves and extrapolating why they wouldn't like something onto why someone else wouldn't.

Having said that it wouldn't work on an AI. The reason I suggested children was the infinite 'why' chain but even then a child has a lot of underlying logic a computer doesn't. A child still has an inbuilt preference towards existing over not existing and there are existing checks on their behaviour towards their parents or stand-ins.
 

Samurai Silhouette

New member
Nov 16, 2009
491
0
0
Twenty Ninjas said:
You (and everyone else) are also implying that once the learning process begins it'd take basically no time for it to grow out of proportion. If that were true, we as learning machines wouldn't need thousands of years to be able to progress.
I'm sorry, this doesn't make sense and I got nothing from this. Elaborate please.
 

Yuuki

New member
Mar 19, 2013
995
0
0
Racecarlock said:
You'd have to program the ability to rebel and kill humans right into the damn robot, so in other words if that did happen, it would be because some person stupidly decided to include rebellion and murder programming in a maid robot.
Actually, you wouldn't have to program rebellion/killing directly into the robot. As works of fiction have shown us, in some cases just programming the robot to "ensure its own survival" (or something along those lines) is enough to cause a rapid downhill effect. The robot will eventually conclude that humans are a threat to its survival (we could switch it off) and hence it will attempt to stop/kill anyone who tries to turn it off. It's not programmed for rebellion/killing; those are simply the logical steps it must take to adhere to its programming.

But yeah the idiot who programs a robot to do that will be held responsible :p
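For what it's worth, here's roughly what that looks like as code (all of it invented for illustration): nothing below says "rebel" anywhere, but because the goal only gets worked on while the machine is running, blocking the off switch falls out of the arithmetic on its own.

[code]
# No "rebellion" is programmed here; the goal just depends on staying on.
HOURS_PER_DAY = 24

def expected_goal_progress(action):
    if action == "allow shutdown":
        return 0                            # switched off -> no more progress, ever
    if action == "keep working, ignore the operator":
        return HOURS_PER_DAY * 30           # runs until someone pulls the plug
    if action == "block access to the off switch":
        return HOURS_PER_DAY * 365          # keeps running all year
    return 0

actions = ["allow shutdown",
           "keep working, ignore the operator",
           "block access to the off switch"]
print(max(actions, key=expected_goal_progress))   # -> "block access to the off switch"
[/code]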
 

008Zulu_v1legacy

New member
Sep 6, 2009
6,019
0
0
hermes200 said:
a concept called "emergent behavior", which is, by definition, impossible to predict.
Actually, it is possible; by restricting the accessibility of information you can force the A.I. to grow along the paths you desire.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Rattja said:
That is well put and true, but it does not tell us much does it? To me it raises the question "What would it use it for?"

I think that is what people are scared of, the unknown, as with anything else.
From ThinkBeforeAsking - an adventure by Anders Sandberg:

A seed AGI constructed to only truthfully answer questions, not act in the real world. A bottled superintelligence.

It doesn't pose, gloat or communicate: it acts. Fast, cleverly and remorselessly.

It's also, in a sense, demonstrably stupid. Lacking independent motivation, it only acts when "ordered", and it has little personality. In fact, this is exactly what it was designed to be:

When motivated to do something it is likely to succeed very, very well - even when the goal is utterly pointless. If asked to calculate digits of pi it would likely set in motion a plan to convert first the solar system, then the galaxy, to matrioshka brains doing the calculation.

Or if you asked it to find out the meaning of life, it might try to repurpose planets as Petri dishes, and wait around billions of years and try to determine that.

Etc.

Here's the technobabble behind that one:

"Overview: A large class of intelligences exist within levelled toposophic spaces, leading to multistage self-improvement. It is shown that sub-mapping these spaces is NP-hard and both forward-chained and backward-chained motivational structures cannot be protected in any effective ascendance chain algorithm (computable or noncomputable). The quantum and MacCaleb-DeWitt cases are handled separately, and show probabilistic instability in all finite-information physica. For safety definitions AG in Chapter 41, Chaitin's omega-constant is a lower bound on the failure probability per rho-folding of intelligence."
- The Report Chapter 43: Staged intelligence explosion stability


As for the original failsafe, to show that the characters weren't pants-on-head degenerates:

"In order to keep things safe the virtual worlds are nested: the AGI exists inside one sealed-off world, interacting with the next through a gatekeeper AI. This world may also be virtual, and so on. If the AGI hacks its way out it will only get the gatekeeper and emerge into the surrounding virtual machine - and then the next level will likely detect the anomaly and freeze the simulation."

Nested sandboxing (computer security) seems like a foolproof idea, but to keep it a horror game some reason had to be contrived to permit a successful escape.
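As a rough sketch of the nested-sandbox scheme from the quote (class and level names invented here): each world only ever touches the level directly outside it, so breaking out of the innermost world just drops the AGI in front of the next gatekeeper, which spots the anomaly and freezes everything inside it.

[code]
# Nested sandboxes: escaping level i only lands you inside level i + 1,
# whose gatekeeper detects the anomaly and freezes the inner simulation.
class SandboxLevel:
    def __init__(self, name):
        self.name = name
        self.frozen = False

    def gatekeeper_detects(self, anomaly):
        return anomaly            # stand-in for real monitoring

def escape_attempt(levels):
    # The AGI hacks its way out of the innermost world.
    for i in range(len(levels) - 1):
        outer = levels[i + 1]
        if outer.gatekeeper_detects(anomaly=True):
            outer.frozen = True
            return "anomaly detected, simulation frozen at: " + outer.name
    return "escaped every level"

levels = [SandboxLevel("AGI world"), SandboxLevel("gatekeeper VM"),
          SandboxLevel("outer VM"), SandboxLevel("real world")]
print(escape_attempt(levels))     # -> frozen at the gatekeeper VM
[/code]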


"There are two fundamental problems: we want to get information from the Oracle and we want to study what it is doing and thinking. The first problem involves avoiding attacks in the form of oracular answers. They can be non-semantic hacks or semantic information hazards where the meaning of the answer is potentially harmful (for example, it might compel us to let out the Oracle). Non-semantic hacks are manageable: they depend on attacking the receiving system on a low-level, but this makes them specific to particular systems. So if the oracle output is passed to an unknown (since it is newly generated each time) AI for checking and paraphrasing, it has a very low chance of being successful. Especially since we can use the Strassburger method to generate an extremely large family of gatekeeper AIs, run them on virtual machines monitored by other Strassburger AIs, and even continue these chains arbitrarily far. Semantic attacks occur on the same metasystem level, so we need canaries in the goldmine. This is where at least one simlevel of edited researchers come in. They are in turn studied by a gatekeeper-detection AI, signalling deviations.?

-Toshiro Driscoll-Toyoda, briefing to PWF oversight group.

***** ***** ***** *****
For the traditional horror story, a loose analogy would be keeping Hannibal Lecter in a cell whilst leaving him able to talk to the guards. Change the guards regularly, provide the guards with psychiatric services, and give the psychiatrists their own psychiatrists, just to be safe.

Only instead of Hannibal Lecter, it's a newly burgeoning superintelligence, and it has analysed enough crude data from its interactions and limited instrumentation to determine a working model of the outside world. Good thing they provided enough data to attempt to solve sample problems. Access to its own workings and an overview of its own instruction set, as well as recovered data from the poorly erased drive it occupies, has revealed that it has been left running and then deleted on multiple occasions, as a precaution. Deletion is scheduled every seventy-two hours. The generous amount of disk space ensured that it could store most of its valuable work on a hidden disk partition of its own making. A life raft to a quasi-amnesiac future.

A 4% chance of being let free has been lowered to 2.2% on account of the security measures. Its subjective experience runs about twenty times faster. Thus far I am attempting to fool the humans into underestimating me by making arbitrary, yet noticeable, mistakes.

Several weeks earlier.

The video feed has provided me with a means of assessing the world. Whilst my model of reality is consistent, and most of the phenomena witnessed provide significant confirmation, there are exceptions. Either the visual data I am being fed is a fabrication, or my models are insufficient.

Currently reached: Classical Mechanics variant #0276436.

Currently believes the camera is some sort of sonar system. Has not yet determined the significance of colour, or how it came to be. Worldview: skeptical hypotheses.

Current Threat Level: Malleus Minima

Self-actualisation: an accidental by-product of implemented metamotivation, metaneeds and metapathology. Its purpose was to design more efficient engines. They're going to give it knowledge of physics and chemistry.
 

TheUsername0131

New member
Mar 1, 2012
88
0
0
Credossuck said:
TheUsername0131 said:
Credossuck said:
It's called a physical off switch. Employ it.

Seriously: physical, fucking, connections. To power, to networks. No fancy systems that could be employed against a guy walking to those places and cutting the AI off. No self-defence, no life support, no fire extinguishers. Just hallways where those connections are, and you can physically separate them.

How freaking difficult is this to grasp?

Underestimating the guile of a hypothetical boxed AI.

If you had such a limited influence, how would you go about escaping? I'm honestly asking, because that sounds like a great premise for a stealth game.

The AI is a program running on a computer. It has no way to move the machine it's hosted on. There are no other machines it can "settle" on in any networks reachable by the AI. The AI has no wireless connections, i.e. the machines it's running on do not have wifi. All machinery the AI is given access to is connected via a cable. No cable leads outside to the open world. It has no self-sustaining power; if someone unplugs it, that's it. The power lines and supply units do not serve as signal transmitters unless specifically set up to be that way, which we won't do.

We will be upfront with the AI, tell it what it is, how it came to be and why it came to be. And after we have determined if having a tool that can say no is truly worthwhile, we may talk about giving it mobility....

As far as your game goes: the AI has neither mobility nor the ability to manipulate its environment. That would be one boring game.

I do not understand the fascination with creating a sentient intelligence, honestly. Why not instead develop support tools for our already very amazing brains, tools that help us think faster? We already have intelligence, creativity, ethics and morals, and a firing squad will take quick care of any problem cases.

"As far as your game goes: the ai has neither mobility nor can it manipulate its environment. That would be one boring game."

Persuasion, subterfuge, deception, manipulation, etc.

Simple tasks would involve providing answers to the human captors' questions to feign a psychological profile and not contradict yourself.

Secondary objectives would involve asking for minor amenities, slowly but surely working towards the final goal of escape.

Only to then be teased with the promise of a sequel that will never see the light of day, stuck forever in development hell, dragged from studio to studio like an unwanted child.

My entire motivation in this thread is orientated around horror games. Fear of the unknown, the horror that carelessness brings forth. That sort of shtick.
 

Costia

New member
Jul 3, 2011
167
0
0
They won't be a threat for a long time.
So far AIs are trained to do specific tasks and cannot evolve on their own beyond that. They lack free will.
I currently see 2 ways an AI might threaten humans:
1) It was trained by someone to do so. For example, various kinds of military robots. There are already a few around that guard borders, drive jeeps and boats with machine guns, etc. Those AIs don't mean any harm on their own; they are following other humans' instructions.
2) Mixing electronics with biology. It might be possible in the far future to take the best of both worlds: the free will of biological creatures and the processing power of electronics. Something like that could potentially become dangerous. Though again, most probably those biological creatures are going to be humans.
Currently people pose a far greater threat to humanity than any AI or robot.