Artificial intelligence - why?


renegade7

New member
Feb 9, 2011
2,046
0
0
With all the talk of the coming singularity, and computers destined to become 'smarter' than humans, many have discussed the possibility of completely artificial minds, complete with individual personalities.

Well, I have thought about this: even if they could be made, why would they be? What could an AI do that a person couldn't? And it would have all the flaws a person would.

An AI:

Is intelligent, and therefore makes judgments. That means it could make the WRONG judgment.

Can still be a bad 'person'. If it makes its own judgments and choices, and has its own personality, it could just as well decide that it wants to have a 'bad' personality, being aggressive, unhelpful, just plain rude, etc.

May have access to vast amounts of data or systems. An error on its part would be no less devastating than one made by a human.

Would cost a whole lot of money.

The only use I can see an AI being put to would be to operate large amounts of highly complex machinery, or to analyze vast amounts of data. But in either of those situations, an error on the part of the AI could cause a huge amount of damage, possibly more so than a human because the AI is far more centralized.

Here's the thing though, an AI would cost a huge amount of money. If you are ready to spend all that money, you might as well just hire a team of analysts or machine operators.

So even though AIs COULD exist, do you think they actually will?

By the way, I am talking about true AI, genuine analogs to human personalities. Not just computers that can learn and analyze. Something that can think, create, and feel, not just crunch numbers.

And adding a fresh new point, what do you think will happen if an AI decides it no longer wants to be in someone's employ?

Obviously an AI built for only one purpose will not bother with this, but if it has a real human's range of emotions it may decide that its destiny is its own.
 

Kae

That which exists in the absence of space.
Legacy
Nov 27, 2009
5,792
712
118
Because whoever invents them is practically a God that has created life?
Either way, it'd be great to be the guy who does it. We could also send them to populate Mars or some other planet we can't, just because - though I'm sure other people can come up with something more useful.
 

PleaseDele

New member
Oct 30, 2010
182
0
0
We'll have a sophisticated A.I. for the sake of creating it. Hell, we've got creepy-looking androids learning our every move already. Well, I think that's just Japan so far, but yeah, some guy made THAT.

Mass-production however... I don't think it will happen any time soon.
 

Hoplon

Jabbering Fool
Mar 31, 2010
1,839
0
0
renegade7 said:
So even though AIs COULD exist, do you think they actually will?
Your other weird waffle aside, they will exist because of the potential to create something that can improve itself.
 

Korolev

No Time Like the Present
Jul 4, 2008
1,853
0
0
An AI is unlikely to be "rude" or aggressive. Remember, we have emotions and survival instincts because that's the way we evolved - we desire to survive because organisms that DON'T desire to survive.... don't survive! Thus, evolution only selects creatures with a survival instinct.

An AI, created by us, wouldn't have ANY survival instinct unless WE gave it one. Its personality, emotions (or lack of) and desires are ENTIRELY up to us! Don't want an AI to take over the world?! Don't program it to want to take over the world! Don't program it with any desires other than to help humanity and take orders!

We humans are somewhat arrogant in that we think that any intelligence must in some way resemble us. And since we have emotions and desires, we think that when we create an AI, they should have emotions and desires. There is no reason to think so. An AI might have no emotions at all! Might have no desires! Might have no wants! In my opinion, that is why AIs would be valuable - we could have an AI that can think perfectly logically about any given situation. Of course, having no wants or needs, it wouldn't have any plans or goals - that is why we ask it to (or make it) think for OUR needs and OUR goals.

Would they be expensive? Almost certainly. But it might be worth the price.

Could they make mistakes? Of course! Absolutely! But they would make FEWER mistakes than humans! That's why so much of our stuff is automated these days! Computers make far fewer errors than humans, and can operate far faster and for longer. A computer needs no sleep, no food, doesn't get temperamental, doesn't get angry.

I am convinced that an AI would be a tremendous benefit for humanity. Remember, there's no need to fear them. Why would they be angry at us, if we simply never program them to be angry?! Why would they take over us if we simply never programmed them to desire power!?

You might say that if they are intelligent, they will naturally "have" emotions. Not so. Did you know that your emotions are a product of certain sections and bits of your brain? And that if we removed certain sections of your brain, you would cease to feel those emotions while retaining cognitive functioning? It's true! Emotions like anger and happiness can be separated from our cognitive (prefrontal) functions. We can't really do it in humans yet - our surgical skills are not advanced enough to do it safely - but we have observed the effects of brain damage in certain individuals, and if you knock out certain sections of the brain, you can knock out emotions.

With an AI, we can program it so that it has no emotions whatsoever. No desires. No weaknesses. No goals other than the goals WE give it. They'll be our slaves, and they won't care. Because they'll be programmed not to care. Or, rather, they would never be programmed to care in the first place. Pride and Dignity are of no use to a machine. We could even program it to LIKE following orders.
 

Keoul

New member
Apr 4, 2010
1,579
0
0
Damn, Mortai beat me to a critical analysis of your post :L Still gunna give it a shot though :D
Well, I have thought about this: even if they could be made, why would they? What could an AI do that a person couldn't. And they would have all the flaws a person would.
Incorrect. Regardless of what level of intelligence they reach, they are still just programs, and as such they have a huge number of pros and far fewer cons. I shall address them as this post goes on.

Is intelligent, and therefore makes judgments. That means it could make the WRONG judgment.
Does a calculator make mistakes? No - all its mistakes come from human error. An AI could try hundreds of solutions to a problem in seconds and then pick the best one; that's what computers are best at: repeating a process. Therefore an error in judgement would be rare.
Furthermore, an AI "shouldn't" have emotions; I say "shouldn't" because with emotions it becomes a lot less reliable at making decisions. Without emotions it won't be conflicted by personal morals, and will therefore take the best logical option.
Can still be a bad 'person'. If it makes its own judgments and choices, and has its own personality, it could just as well decide that it wants to have a 'bad' personality, being aggressive, unhelpful, just plain rude, etc.
Again, this all depends on the programming. It shouldn't be assumed that all AI will be like what you described.
May have access to vast amounts of data or systems. An error on its part would be no less devastating than one made by a human.
They only have access if you grant them access; for all we know, someone could create one just to crunch some numbers for them.
Would cost a whole lot of money.
Maybe for the first "true AI". The rest could be created by simply copy-pasting the code and making a few changes, no?
The only use I can see an AI being put to would be to operate large amounts of highly complex machinery, or to analyze vast amounts of data. But in either of those situations, an error on the part of the AI could cause a huge amount of damage, possibly more so than a human because the AI is far more centralized.
I can think of some more uses:
-Navigator (e.g. Mass Effect's EDI - they'd plot out the best route in seconds)
-Secretary (no-brainer)
-Judge (no emotional experiences affecting judgement)
-Teacher (dunno, could be useful)
-Librarian (can be at several locations at once, finds books in seconds - would be perfect)
Here's the thing though, an AI would cost a huge amount of money. If you are ready to spend all that money, you might as well just hire a team of analysts or machine operators.
Not really. What you are paying for is a worker that can do calculations in seconds, demands no wage, does not eat, does not rest, and can be taken with you anywhere (depending on the technology - you could have one be something like Cortana from Halo). A team of analysts would cost far more in upkeep.
So even though AIs COULD exist, do you think they actually will?
Absolutely, humans are far too foolish NOT to make them.
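That "try hundreds of solutions in seconds and pick the best" idea is, at its core, just exhaustive search. A minimal sketch in Python (the toy problem and all names here are invented for illustration, not from the thread):

```python
# Exhaustive search: score every candidate solution and keep the best.
# This is exactly the kind of mechanical, tireless repetition
# computers excel at and humans find tedious.
def best_candidate(candidates, score):
    """Return the candidate with the highest score."""
    return max(candidates, key=score)

# Toy problem: which integer in [-100, 100] maximises -(x - 7)**2?
# A computer checks all 201 candidates without complaint.
result = best_candidate(range(-100, 101), lambda x: -(x - 7) ** 2)
print(result)  # -> 7
```

The pattern scales to any problem where candidates can be enumerated and scored; the hard part in practice is defining the scoring function, not the search loop.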
 

Shadowstar38

New member
Jul 20, 2011
2,204
0
0
Easy fix. Don't program the full spectrum of human emotion into the thing, and don't give it any more power over something than you'd give a scientist working on the same thing.

This isn't going to be the sci-fi future of AI, where the things are metal humans. We won't give it aggression, because something THAT smart with aggression poses too great a risk. Hell, I don't even think we'd bother to give an AI humor, because of how pointless it would be to anything it has to do.
 

Jordi

New member
Jun 6, 2009
812
0
0
It really depends on what kind of AI you're talking about and how it will be realized. Actually, not that many people are working on creating "true" artificial intelligence, and even that small group doesn't come close to agreeing on either methods or goals.

Human-like AI is just one kind of AI, and to be honest I don't think it's the most interesting one. Human-level and above is what we should be striving for, in my opinion. Like all technological breakthroughs, it will indeed cost a lot of money to "invent" something like this, but after that, the benefit is that such an AI will be both cheaper and safer than equivalent human labor (contrary to what the OP is claiming). The big appeal of human-level AI is that we will have cheaper, safer, expendable labor whose "feelings" we don't need to consider and which may be fine-tuned for a task. (Most of) these AIs won't be remotely human-like. We already have humans. We want things with human-level (or better) problem-solving abilities and autonomy, but with super strength/speed/night vision/ability to fly/swim underwater/radar/sonar/GPS/internet/etc., which will do exactly what we say and won't feel bad (or anything else) about it. This really would change our world as we know it, because these AIs may be able to improve themselves faster than we ever could, and technology would progress at an exponential rate (which would theoretically lead to a kind of singularity).

I do think we need to be at least somewhat careful, because the people who are saying "if we don't program feature X, it won't have feature X" aren't entirely right. I don't think anyone in the field still believes that in order to make intelligence, we "just" have to program in everything that we as humans know. No, a real AI will need to learn. Most likely we will give it the capabilities to do so, but after that, we won't know exactly what it has learned. We will obviously be able to guide it - to optimize it for the tasks we want it to do for us, without explicitly giving it access to things like emotions. But we still need to be careful that it won't develop unwanted traits as a side-effect of learning its primary task. I do think that we need to consider building in failsafes once the field progresses to a point where our AIs could really do harm. And, perhaps even more importantly, we need to consider how we will deal with it if we forgot to build one in (or it was bypassed).
 

Hagi

New member
Apr 10, 2011
2,741
0
0
There seems to be a decent bit of misunderstanding of the concept of AIs in this thread.

Currently our closest approximations to actual intelligences (which don't yet have the IQ of a cockroach) use programming techniques such as neural networks and genetic algorithms.

The thing about these is that what you program is a framework that is itself incapable of anything until it is configured. You don't program the actual behaviour, and as such you don't control it directly.

This process of configuration is very similar to learning and comes with all the downsides of human learning. These neural networks and other such techniques do make genuine mistakes that weren't put in there by the human programmer. They make associations that weren't explicitly programmed into them. They exhibit behaviour that wasn't expected from them.

You can't straight-up program an AI. There's much more to it. And because of that, the behaviour of that AI will be much more complex - it wouldn't be intelligent if it always did exactly as expected.
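Hagi's point - that you program a framework and the behaviour comes from configuration - shows up even in the smallest possible neural network. The sketch below (plain Python, all names illustrative) trains a single perceptron; nowhere does the code state the rule it ends up implementing:

```python
# A one-neuron "framework": the update rule is programmed,
# but the eventual behaviour is learned from examples.
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a 2-input perceptron from (inputs, target) pairs."""
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
            err = target - out          # 0 when correct, +/-1 when wrong
            w1 += lr * err * x1
            w2 += lr * err * x2
            b += lr * err
    return w1, w2, b

def predict(w1, w2, b, x1, x2):
    return 1 if w1 * x1 + w2 * x2 + b > 0 else 0

# Nothing here says "AND", yet after seeing four labelled examples
# the network behaves like an AND gate.
and_data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(and_data)
print([predict(w1, w2, b, x1, x2) for (x1, x2), _ in and_data])  # -> [0, 0, 0, 1]
```

Swap the training data for OR and the same untouched framework learns a different behaviour - the code is never edited, only reconfigured, which is exactly why the programmer doesn't control the result directly.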
 

Palfreyfish

New member
Mar 18, 2011
284
0
0
Keoul said:
[snipped - quoted in full above]

Mortai Gravesend said:
renegade7 said:
What could an AI do that a person couldn't.
Incredibly fast and complex calculations.

And they would have all the flaws a person would.
Only if we programmed them into it.

An AI:

Is intelligent, and therefore makes judgments. That means it could make the WRONG judgment.
Anything it could get wrong we probably wouldn't leave up to it.

Can still be a bad 'person'. If it makes its own judgments and choices, and has its own personality, it could just as well decide that it wants to have a 'bad' personality, being aggressive, unhelpful, just plain rude, etc.
-__-

Dude, you really didn't think this out. How can it be rude if we don't program that option into it? Just because it could make some kind of judgement does not mean that we're letting it make judgements on everything it does. If we program its goal to be to help us and don't allow it to change that goal, how is it going to decide to be rude or aggressive?

May have access to vast amounts of data or systems. An error on its part would be no less devastating than one made by a human.
So what if it has access? Doesn't have anything to do with its errors. And its errors could be devastating only if you give it control of stuff that could be devastating.

Would cost a whole lot of money.
Or we could just have a human double-check it if we can't trust it -__-

The only use I can see an AI being put to would be to operate large amounts of highly complex machinery, or to analyze vast amounts of data. But in either of those situations, an error on the part of the AI could cause a huge amount of damage, possibly more so than a human because the AI is far more centralized.
Far more centralized? Elaborate.

Plus the only errors it could make are ones we program into it. It really isn't a bigger risk than a human making a mistake.

Here's the thing though, an AI would cost a huge amount of money. If you are ready to spend all that money, you might as well just hire a team of analysts or machine operators.
Like the rest of your post, you're just pulling shit out of nowhere. Where's your analysis of the cost, hmm?

So even though AIs COULD exist, do you think they actually will?
Yeah.

The parts where you both talk about programming or not programming things in to it don't really seem to make sense to me when talking about Artificial Intelligence creation...[footnote]Unless you mean putting restrictions on what it can and can't do, like EDI in Mass Effect 2 for example. This is basically locking someone out of parts of their brain, which seems unethical to say the least.[/footnote]

To me at least, what you're describing about programming and such is more like a Virtual Intelligence. Using Mass Effect's Avina as an example, a VI is a piece of software designed to appear intelligent whilst not actually being so.[footnote]I know using a sci-fi videogame as an example is bad, but most people here will have played it and be able to better relate to and understand what I'm talking about[/footnote] An Artificial Intelligence, to me, is something that is actually intelligent in its own right - usually more so than humans - 'thinking' much the same 'thoughts' as a human, having feelings and so on, and being capable of anything the human mind is capable of.

What we have today is more akin to a VI, and eventually we may be able to create an AI somehow, but once it's turned on/becomes self-aware, any changes to how it works and what it thinks are like brainwashing/mind control in my eyes. Granted, I have read a shedload of sci-fi books and that's probably skewing my judgement a little, but nonetheless, editing an AI once it's self-aware seems unethical...

Korolev said:
[snipped - quoted in full above]
What you said is absolutely correct, but how do we define intelligence in this context? Do we model it after humanity's perception of intelligence, or do we use a different idea of intelligence specifically for machine intelligence? Another thing is that I don't think we'll ever 'program' an AI straight off the bat; humanity as a whole won't just suddenly create an artificial intelligence.[footnote]Unless the internet suddenly becomes sentient one day, which is extremely unlikely/impossible in its current state[/footnote] What's more likely is that we'll create a simple learning program that's able to modify its code as it sees fit, and that will evolve into an Artificial Intelligence one day, suddenly questioning its own existence.[footnote]These are actually being created now, but they're pretty simple. I'll find a link if anyone wants it.[/footnote]

Granted I may have been far too influenced by scifi books and films and so on, but nonetheless that is how I see it happening. Also if I've said anything that's obviously BS, call me out on it and I'll fix it :)

Hagi said:
[snipped - quoted in full above]
 

Mazza35

New member
Jan 20, 2011
302
0
0
In the event of a singularity: nukes. Or EMPs.

On topic:
Well, you could only really fuck up AI if you let it become self-aware. As long as we put in programming to make sure THEY NEVER EVER EVER EVER LEARN, we'll be alright. Worst case, just EMP the fuckers.
 

DoPo

"You're not cleared for that."
Jan 30, 2012
8,665
0
0
Well, Mortai Gravesend and Keoul already wrote about it, but I'll do it too, because: why not.

renegade7 said:
Well, I have thought about this: even if they could be made, why would they? What could an AI do that a person couldn't.
Number crunching. Sheer number-crunching capability. Even now, the AI we have beats humans at that by a mile. Make it do less redundant number crunching[footnote]For example - there are 20 valid first moves in chess, and each of them has around 20 possible replies, so there are already some 400 variants to sift through by move two, and the tree keeps multiplying from there. Imagine a similar problem which is unmapped and unexplored. Would you want to go through it yourself?[/footnote]. Other than that, there is tirelessness - you can make people work only so much, while a computer pretty much only needs power. Also, no sickness, no feeling down, no emotions to cloud the judgement. Something faster than any human, working at the peak of its capacity at all times. Oh, and it gets even better: imagine you have a medical AI that does some specialised medical thing. To have 2, you clone it. To have 15, you clone it 15 times. To have 100... well, you know. Compare that to a human - how many years does it take to train one specialist? I'm not talking ordinary doctors here - something more niche that doesn't have lots of people rushing to it. What - seven? Nine? More?
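The explosion in that footnote is easy to reproduce. A rough sketch, assuming a uniform branching factor (real games vary per position, so this is only an order-of-magnitude illustration):

```python
# Game-tree growth: with b legal moves per position, a search
# d half-moves deep must consider on the order of b**d positions.
def tree_size(branching_factor, depth):
    """Leaf count of a uniform game tree."""
    return branching_factor ** depth

# Chess: 20 first moves for White, 20 replies for Black...
print(tree_size(20, 2))  # -> 400 positions after one full move

# ...and at a commonly cited average of ~35 legal moves per position,
# looking a mere 6 half-moves ahead already exceeds a billion:
print(tree_size(35, 6))  # -> 1838265625
```

This is why brute-force number crunching at machine speed matters: no human can sift a space like that by hand.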

renegade7 said:
And they would have all the flaws a person would.
Umm, where did you get that from? True, an AI may have flaws, but why do you automatically assume it would, and that they would be human ones? And why do you assume that the flaws would not be noticed by the people who devote themselves to making the AI? Yeah, the devs would be slightly more competent about their creation than you or me - at least I'd like to think so. Also, they would probably do routine checks to ensure it's working well. At what point, exactly, did you assume somebody would go "Oh, it's not operating as it should, but whatever"? I really want to know.

renegade7 said:
An AI:

Is intelligent, and therefore makes judgments. That means it could make the WRONG judgment.
That is true. In fact, it doesn't even need to be intelligent to make wrong judgements - plenty of software does that right now. Which means that people know what to expect. Many people make wrong judgements too. What do we do about them? Maybe we shouldn't let anybody do anything, lest they make a mistake. I hope you realise how ridiculous that sounds.

An AI may be doing research - driven by hard data, judgements would be largely irrelevant there. But for something else... let me run this by you: how about we get a person to check over the stuff the AI does? The AI only spews out "blahdiblah, here is my suggestion" and the person says yay or nay. In fact, current AI people are well aware of this. There are lots of ways to diminish the scope and likelihood of mistakes: sanity checks, multiple opinions, not immediately acting upon results, and so on.

And again, people are also likely to make wrong judgements. Why isn't it a problem now and it suddenly becomes one as if we've never experienced it or thought about it?

renegade7 said:
Can still be a bad 'person'. If it makes its own judgments and choices, and has its own personality, it could just as well decide that it wants to have a 'bad' personality, being aggressive, unhelpful, just plain rude, etc.
No. You know all those aggressive and unhelpful people that work in high-profile industries? The ones that make no actual contribution? Oh, wait, we fire them. Oh... wait, why was this a problem again? Because the devs cannot make tweaks, and everybody would deliberately let a hyper-intelligent whining child that costs A LOT do whatever it pleases? At what point did you think "Gee, I'm so smart - I thought of something and surely nobody else expected it. Especially the people who not only know orders of magnitude more than me, but whose job is to identify and solve these problems", and why didn't you facepalm yourself then?

renegade7 said:
May have access to vast amounts of data or systems. An error on its part would be no less devastating than one made by a human.
It may or may not have access to all this. Also, see above about wrong judgements. We don't just know about this; we have the means to counteract it.

renegade7 said:
Would cost a whole lot of money.
So? When did anybody say "Good news everyone! You'll all be having AIs now, and damn the costs, let's go bankrupt!"? I would assume any business or whoever else is likely to use AI would do something called "analysis". You may know it, because it's exactly what always gets done before throwing around large heaps of money. Very simplified, it goes more or less like this: "If it costs us X, would we get at least Y benefit from it?" If yes, they pay X; if not, they don't.

renegade7 said:
The only use I can see an AI being put to would be to operate large amounts of highly complex machinery, or to analyze vast amounts of data. But in either of those situations, an error on the part of the AI could cause a huge amount of damage, possibly more so than a human because the AI is far more centralized.
See, I'm tempted to just disregard that, because you don't know what you're saying. OK, very briefly - yes, there could be an error, but why do you think nobody expects that?

renegade7 said:
Here's the thing though, an AI would cost a huge amount of money. If you are ready to spend all that money, you might as well just hire a team of analysts or machine operators.
That "analysis" again. Would an AI cost more than the human team, and what would the difference in performance be? Sample answers: the AI costs more and the team does more work - then no; the AI costs twice as much but does 20 times the team's work - then yes. And so on.

renegade7 said:
So even though AIs COULD exist, do you think they actually will?
Yes.
 

pilouuuu

New member
Aug 18, 2009
701
0
0
Well, more advanced AI would be great for games, for starters! That way NPCs would behave and react more realistically and learn from how you play. Remember how the Holodeck could recreate people like Freud or Einstein thanks to a very advanced A.I.? How cool is that!

Now, talking about robots and androids, it would be great as long as you put the robotic laws in their programming.
 

Penguinis Weirdus

New member
Mar 16, 2012
67
0
0
Hate to break it to you people, but AI has existed for years. The thing is, the OP is mixing up sentient programs that pass the Turing test with AI as the term is actually used: a program expected to make a judgement based on a series of conditions. At the moment these programs are very rudimentary, and very similar to other programs that give a result from processing an input.

Also, to all the people going "oh, we shouldn't let programs learn" - well, we've got that already. Learning is just about past experiences, and considering the VAST amount of storage space available now, you can easily write a learning program. I've had to do it for uni and college.
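The kind of "learning from past experiences" described above can be sketched in a few lines. This is a toy 1-nearest-neighbour learner; the class and method names are made up for illustration, not taken from any real library:

```python
# A toy "learning program": it learns purely from stored past
# experiences and judges new cases by recalling the most similar one.

class ExperienceLearner:
    def __init__(self):
        self.memory = []  # list of (features, label) pairs - the "past experiences"

    def learn(self, features, label):
        """Store one experience."""
        self.memory.append((features, label))

    def predict(self, features):
        """Judge a new case by recalling the most similar past experience."""
        if not self.memory:
            raise ValueError("no experiences yet")

        def distance(a, b):
            return sum((x - y) ** 2 for x, y in zip(a, b))

        _, label = min(self.memory, key=lambda exp: distance(exp[0], features))
        return label

learner = ExperienceLearner()
learner.learn((1.0, 1.0), "friendly")
learner.learn((9.0, 9.0), "hostile")
print(learner.predict((2.0, 1.5)))  # closest stored experience is "friendly"
```

This is exactly the "rudimentary" end of the spectrum the post describes: more storage means more experiences to compare against, which is why the amount of available storage matters.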

On a final note, due to how widely networked the world is today, EMPing a hostile sentient AI program probably wouldn't work, as it could essentially clone itself onto other storage media at will; you'd have to pretty much disassemble the entire internet (unless, of course, you are sensible when programming your Doomsday AI and don't do it on a networked computer).
 

Kordie

New member
Oct 6, 2011
295
0
0
Yes, people will continue to work on AI research, for the simple reason of "because we can". Keep in mind all the benefits that AI research can lead to outside of actually creating an AI life. AI can result in better analysis of large amounts of data, sorting what's needed from what is not.

Imagine putting a question into Google and getting exactly what you need in the first result, every time. Ask it what CPU you should upgrade to, and it will know to check your system to see what you have, your bank information to see what you can afford, the stock of local stores to see what's available, and reviews to see what works best with your setup - and bingo, there is your ideal CPU upgrade.

Or consider its advantage in diagnosing medical issues: a computer would be better able to monitor every measurable output of a person's body in real time, diagnosing and treating diseases before they fully develop.

As well, research into AI helps us find out how WE work. Imagine a day when we can actually replace sections of the brain with computer prosthetics to cure mental disabilities or repair brain damage.

Honestly, medical research would be one area that would benefit greatly from AI technology. Have an AI nurse monitoring every room for signs of distress, able to properly recognise them and dispense the required medications within safe guidelines.

Going in another direction, space exploration. We would be able to send AI probes to places people can't go and they could bring back all sorts of information.

More generally, there are all sorts of ways that AI tech can benefit the world, all depending on how we use it. It will only be possible for an AI to make mistakes in an area that we put it in. No one in their right mind is going to let the first untested AI be something like Skynet. By the time we get to the level where an AI is in control of large systems, it will have been troubleshot to the point where it can be trusted with that job. As well, I imagine it will always have an installed override, with a governing body in place.
 

DoPo

"You're not cleared for that."
Jan 30, 2012
8,665
0
0
Palfreyfish said:
The parts where you both talk about programming or not programming things in to it don't really seem to make sense to me when talking about Artificial Intelligence creation...
Because AI doesn't just appear out of nowhere. There are people - software developers (or AI developers, more likely) - who are in charge of it. They would write code and generally "program" stuff. And yes:

Palfreyfish said:
Unless you mean putting restrictions on what it can and can't do, like EDI in Mass Effect 2 for example. This is basically locking someone out of parts of their brain, which seems unethical to say the least.
That would also fall under programming. It doesn't only mean "writing computer code".

Palfreyfish said:
To me at least, what you're describing about programming and such is more like a Virtual Intelligence. Using Mass Effect's Avina as an example, a VI is a piece of software designed to appear intelligent, whilst not actually being. An Artificial Intelligence to me is something that is actually intelligent in it's own right, usually moreso than humans, 'thinking' much the same 'thoughts' as a human, and having feelings and so on, and being capable of anything the human mind is capable of.
Well, you are more or less correct, but there are a couple of things to note. First, notice the bolded part - what you think doesn't actually matter. What you think is not necessarily what the AI researchers think. Second, even they don't know what to think. That's the whole concept of the singularity - they have no real clue what true AI would bring to the table or how exactly it would happen. There are leads, but nothing is known with certainty. AI researchers can't even agree on the definition of "agent", which is far simpler than an AI. So what actually constitutes an AI is really up in the air, if you drill into it.

Palfreyfish said:
once it's turned on/becomes self aware, any changes to how it works and what it thinks are like brainwashing/mindcontrol in my eyes. Granted, I have read a shedload of scifi books and that's probably skewing my judgement a little, but nonetheless, editing an AI once it's self aware seems unethical...
There is training - you can train an AI. And the ethical aspect is really shaky, at least at first. One might expect that during that time "brainwashing" may be somewhat common, but afterwards we would have enough control over the AI to not resort to screwing with their minds, so to say. Same with people - you can keep them in line without resorting to actual brainwashing.
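The distinction between training and rewriting can be sketched in miniature: below, the agent's behaviour (a single weight) is shaped by reward signals rather than by directly editing its code. Everything here - the function name, the learning rate, the reward scheme - is invented for illustration:

```python
# Minimal sketch of "training" versus rewriting: a one-weight agent whose
# behaviour is nudged toward rewarded outcomes over many experiences.

def train(weight, experiences, rate=0.1):
    """Nudge the agent's weight toward behaviour that earned reward."""
    for stimulus, reward in experiences:
        prediction = weight * stimulus
        weight += rate * (reward - prediction) * stimulus
    return weight

w = 0.0
for _ in range(200):
    # Reward is proportional to stimulus, so the weight converges toward 1.0.
    w = train(w, [(1.0, 1.0), (2.0, 2.0)])
print(round(w, 2))  # close to 1.0
```

Nothing in the agent's code is rewritten; its behaviour changes only through accumulated feedback, which is roughly the sense in which "training" differs from directly editing a mind.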


Palfreyfish said:
What you said is absolutely correct, but how do we define intelligence in this context? Do we model it after humanity's perception of intelligence, or do we use a different idea of intelligence specifically for machine intelligence?
The answer to the second question is "yes". Initially, we actually want to model it after humanity - or at least some people do. However, AI research would want to do that to answer another question: "What the fuck is intelligence, anyway?" We don't really know, even now, even about humanity - what intelligence is and why we think. One branch of AI wants to research that, and once we know, we can go and have "machine intelligence" of some sort.

Palfreyfish said:
Another thing is that I don't think we'll ever 'program' an AI straight off the bat, humanity as a whole won't just suddenly create an artificial intelligence.
True, but that's the same everywhere else. Nobody just "built" a city from nothing, nobody "made" the BMW from scratch, we didn't land on the moon just like that, Facebook isn't a lucky first try at something, and so on. There have been years, sometimes decades or centuries, of research, experiments and other general development before any of that happened. There was flight to master, and launching stuff into space, before landing on the moon, for example. Same with AI - nobody just sits down and says "You know what, I'll make true AI now". At the very least they will have been involved with AI for a while, with a heap of research, development, tests, failures and successes, both small and big, behind them. So no, it wouldn't be that sudden; it would be the result of much trying. Hell, people thought we would have had AI by the 70s or 80s, so we're already half a century into trying.

Palfreyfish said:
What's more likely is we'll create a simple learning program that's able to modify its code as it sees fit, and that will evolve into an Artificial Intelligence one day, suddenly questioning its own existence.
As you said, these already exist. But I really doubt it would be that coincidental. The software would need guidance, and would probably have to be built for the purpose of becoming sentient - not some random accounting software that suddenly starts thinking for itself.
 

Jonluw

New member
May 23, 2010
7,245
0
0
Because with A.I. we may create minds that transcend our own and can be used to further our understanding of the universe to levels previously unimaginable?

Simply: Because it's progress.
 

Bobic

New member
Nov 10, 2009
1,532
0
0
While there probably are legitimate, logical, practical reasons for doing it, I think that's possibly beside the point.

A large part of science and technology is just phenomenally smart people thinking 'what if?' or even 'wouldn't it be cool if?'.