Poll: A.I. Should it be invented or not?

Henkie36

New member
Aug 25, 2010
678
0
0
This is a difficult discussion. On one side, it could be very awesome and useful. Of course, a machine which thinks for itself, and which you won't have to tell what to do every five minutes, would be practical.

On the other side, it could go horribly wrong, The Matrix style. Machines overthrow us, and we have to fight them.

Since I don't trust humanity (including myself) with great power and dangerous technology, I'm going to have to say... no, this should not be invented.

EDIT: No, AI has not already been invented. What I mean by AI is a program which can generate its own logic. I don't mean the stuff in games, where enemies actually shoot at you and not at the door you just came through. That behaviour has been pre-written and is simply the result of clever programming. If it can generate solutions to problems we didn't write into it, it's AI. Like if your calculator were able to solve grammar problems.
 

scrubnpuff

New member
Jun 11, 2011
28
0
0
Wait a sec... wasn't AI already invented? I understand you mean we shouldn't have robots or things like that with AI, but the PS2, with its AI-centric games, was supposedly capable of launching missiles. I played a TON of PS2 and my house hasn't blown up yet.
 

burningdragoon

Warrior without Weapons
Jul 27, 2009
1,935
0
0
Well AI is already invented and will only progress more and more as time goes on. /nitpicky

You of course mean: should we progress AI technology to Skynet-esque levels? That is a very, very, verrrry long way away. Assuming it's even possible, it's not something anyone alive today is likely going to have to worry too much about.

So... yes, I guess is my answer?
 

FalloutJack

Bah weep grah nah neep ninny bom
Nov 20, 2008
15,489
0
0
scrubnpuff said:
Wait a sec... wasn't AI already invented? I understand you mean we shouldn't have robots or things like that with AI, but the PS2, with its AI-centric games, was supposedly capable of launching missiles. I played a TON of PS2 and my house hasn't blown up yet.
I can explain, O knowledge-seeking individual. The AI classification is... not in keeping with what we've made. Artificial intelligence is supposed to be like HAL 9000, C-3PO, the WOPR from WarGames, and so on. We're talking real and growing intellect with a personality. What we have is sophisticated, but the AI isn't quite at the sci-fi level yet. Last I checked, we have learning machines that will gradually become better and better, but do we have the robot equivalent of the human being, a living intelligence? Not yet. Getting there.
 

scrubnpuff

New member
Jun 11, 2011
28
0
0
I think people would use the advanced AI for ultra sex simulations before they make killing machines with it, so there truly isn't much to worry about. Unless... one of those sex sims goes rogue... the boner cruncher? There's a Terminator sequel, right THERE.
 

FalloutJack

Bah weep grah nah neep ninny bom
Nov 20, 2008
15,489
0
0
scrubnpuff said:
I think people would use the advanced AI for ultra sex simulations before they make killing machines with it, so there truly isn't much to worry about. Unless... one of those sex sims goes rogue... the boner cruncher? There's a Terminator sequel, right THERE.
Hey, wasn't that a plot in a Ghost in the Shell movie?
 

Da Orky Man

Yeah, that's me
Apr 24, 2011
2,107
0
0
Definitely. Despite mankind's technology, the world still exists; we haven't nuked it yet. We would have to control a true AI very tightly, like NEVER giving it internet access, but the help it could bring could save mankind.
 

Innegativeion

Positively Neutral!
Feb 18, 2011
1,636
0
0
I honestly find sci-fi horror scenarios of rampant AI to be extremely far-fetched. Just as humans tend to lean toward the non-murderous side most of the time, I feel as though a sufficiently advanced AI that learns most of its knowledge on its own would likewise value life, especially if it's programmed to be sympathetic; something that shouldn't be too hard to program after we've mastered learning.

Anyway, I prefer the Mega Man scenario: machine and man living together in harmony. We just gotta watch out for this guy:
 

RedMore Trout

New member
Jul 29, 2011
38
0
0
Yes, science should always be progressing forward. No law should ever prevent learning or creating. If you are worried about how a certain scientific creation will be used, then ban the exact use that you fear, not the research that led up to it.

Well, that turned out really awkward; hopefully you can make some sense out of my ramblings.
 

NinjaDeathSlap

Leaf on the wind
Feb 20, 2011
4,474
0
0
In short... no.

In full... No, because if we ever invented machines that could think, feel, and learn all by themselves, we would also have to develop measures to restrict their ability to communicate with each other in ways that we cannot (like preventing internet access, as someone mentioned earlier). I don't know about you, but I'm pretty sure machines with their own sense of free will would not take kindly to us holding back their potential to evolve by themselves.

This would not end well. Not necessarily WW3, but it would not end well.
 

IBlackKiteI

New member
Mar 12, 2010
1,613
0
0
Have a form of sentient machine intelligence rule over us instead of our own silly leaders? Fuck yes.

I wanna be ruled by some god machine instead of other dumbasses, like the Helios ending in Deus Ex.
 

CrystalShadow

don't upset the insane catgirl
Apr 11, 2009
3,829
0
0
Innegativeion said:
I honestly find sci-fi horror scenarios of rampant AI to be extremely far-fetched. Just as humans tend to lean toward the non-murderous side most of the time, I feel as though a sufficiently advanced AI that learns most of its knowledge on its own would likewise value life, especially if it's programmed to be sympathetic; something that shouldn't be too hard to program after we've mastered learning.

Anyway, I prefer the Mega Man scenario: machine and man living together in harmony. We just gotta watch out for this guy:
Funny, really, that the OP mentions The Matrix, when according to the backstory the AI in The Matrix is actually very benevolent, and it's the humans who are total dicks...

The AI basically tried to do things peacefully, but the humans refused to cooperate at all, so they did what they had to in defending themselves...

Even then, they created the matrix rather than just kill every human they found... (The 'battery' idea is just stupid.)
 

RagTagBand

New member
Jul 7, 2011
497
0
0
No, no, no. It's late and I need sleep, but rest assured there are a multitude of reasons why sentient machines (keeping with the common use of "AI") should never be created.
 

mitchell271

New member
Sep 3, 2010
1,457
0
0
The only version of AI that I can think of that we could use is the kind from Halo, a la Cortana (and Deja, if you've read The Fall of Reach). If we make so-called "dumb" AI (AI that is programmed entirely from scratch and made specifically for one purpose, e.g. controlling farming machines) and learn to control it while we progress, we could create fail-safes: sub-routines that would detect whether what the AI was doing went against its own, unchangeable, main function.

Once we have those down and triple-checked, we could fully map the human brain, understand every single node and electrical signal that it sends and receives, and create a smart AI built from a human mind. This one would be able to learn, to adapt, to perform any task it wished; it would be, essentially, a human mind in a computer.

This may seem like I'm having delusions of grandeur, but I think humanity might have a shot at this if it takes the proper precautions.
 

BonsaiK

Music Industry Corporate Whore
Nov 14, 2007
5,635
0
0
I thought I'd ask Cleverbot about this. Here's what I got:

BK: Hello, Cleverbot.
CB: You're not so clever.
BK: Gee thanks.
CB: Hello kaitlyn.
BK: I'm not Kaitlyn but whatever. I wanted to ask you something.
CB: Ask me something then?
BK: Do you think AI should be invented?
CB: Not if you're the result.
BK: I think you're the result, actually. Are you okay with that?
CB: Well, it is kind weird I think.
BK: I would tend to agree. However, do you think it's a good or a bad thing?
CB: Well I do have a personality. Why do you ask that?
BK: Just curious. You seem to be evading the question.
CB: Which question?
BK: Do you think AI should be invented?
CB: Uh, sure... I guess. As long as super-intelligent robots don't take over the world.
BK: Thanks for your time.
CB: It's my pleasure to be a help for you if I can.