Poll: Sentient robots: Will they or won't they?

Zacharine

New member
Apr 17, 2009
2,854
0
0
Sentient? Yes.

Notice that sentience might not necessarily require true AI, merely a highly adaptable, self-correcting machine. We already have practical applications based on fuzzy logic, i.e. machines and devices that slowly 'adapt' to changing circumstances.

It might not be too long before we have computer programs that reorganize themselves to work more efficiently based on the user's history. For example, if you use the manipulation features of an image editing program but not its actual creation tools, it might reconfigure itself to let the creation tools run more slowly, improving the performance of the manipulation tools.
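Roughly, that kind of usage-driven adaptation could look like the sketch below. This is purely illustrative: the feature names, the "preloaded" mechanism and the cut-off of two features are all made up for the example, not taken from any real editor.

```python
from collections import Counter

class AdaptiveEditor:
    """Toy image editor that reprioritises features based on usage history."""

    def __init__(self, features):
        self.usage = Counter({name: 0 for name in features})
        self.preloaded = set(features)  # start with everything kept 'hot'

    def use(self, feature):
        """Record that the user invoked a feature, then re-tune priorities."""
        self.usage[feature] += 1
        self.reconfigure()

    def reconfigure(self, keep=2):
        """Keep only the most-used features preloaded; the rest load lazily."""
        self.preloaded = {name for name, _ in self.usage.most_common(keep)}

editor = AdaptiveEditor(["crop", "levels", "brush", "shapes"])
for _ in range(10):
    editor.use("crop")
    editor.use("levels")
editor.use("brush")
print(editor.preloaded)  # the manipulation tools stay 'hot'; the creation tools don't
```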

On a final note, as was said in the game Alpha Centauri:

"Begin with a function of arbitary complexity. Feed it values, 'sense data'. Then, take your result, square it and feed it back into your original function, adding a new set of sense data. Continue to feed your results back into the original function ad infinitum. What do you have? The fundamental principle of human consciousness."
 

Kevvers

New member
Sep 14, 2008
388
0
0
When you consider that humans are a lot like biological machines programmed to survive and reproduce, like all other biological life, then it seems quite likely we could develop an artificial analogue. I don't think they will ever develop a human-type intelligence; I think they will develop a machine intelligence. By which I mean, our perception of the universe was shaped by organisms surviving through successive generations, so we have a lot of hard-coded stuff like the hunting instinct and the survival instinct, and that was a process that took place over a very long time period. A machine consciousness will probably evolve out of the self-improvement of existing machine intelligence, so I should think it would hold the sanctity of life in much higher regard than we do, as it wouldn't have those animal instincts.

I always imagine a machine intelligence as being somewhat like an extreme version of Mr. Spock. It would probably regard us as rather tragic ancestors, doomed to suffer and die in ways it couldn't possibly understand, and may decide to help us by curing disease and preventing as much death as possible, and also try to stop us killing each other. I am a big fan of Isaac Asimov's novels, as you can probably tell.

I think the bigger threat comes from unintelligent machines, like nano-machines. A simple error or mutation and we suddenly have microbe-like machines busy converting the earth's atmosphere into something unbreathable, or breaking down water into hydrogen and oxygen.
 

Lord Azrael

New member
Apr 16, 2009
125
0
0
The field of machine learning is devoted to this. Its ultimate aim is to get robots and machine intelligences to display emergent intelligence, that is, an output that is greater than the sum of its parts, like ant colonies: simple individual entities cooperating to achieve otherwise impossible goals (rather like cells in a human body). To model this there is a whole segment of robotics, called swarm robotics, that uses large numbers of simple robots programmed with simple rules (e.g. turn towards light, back away from obstacles). These swarms are not intelligent per se, but they achieve remarkable things by developing emergent behaviours.

Intelligence is defined in many different ways, but we have yet to instigate anything beyond intelligent mimicry of things like curiosity and fear: the robot or machine may have a sense of self, but it is not sentient. It can be programmed with behaviours to simulate these things, and those behaviours can be developed based on the robot's experiences.
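To make the "simple rules" idea concrete, here is a rough toy agent. It is not any specific swarm platform; the grid, the two rules and the distance threshold are invented purely for illustration.

```python
import random

def step(robot, light_pos, obstacles):
    """One update for one swarm agent: back away from obstacles, else turn towards light."""
    x, y = robot
    # Rule 1: back away from any obstacle that is too close.
    for ox, oy in obstacles:
        if abs(ox - x) + abs(oy - y) < 2:
            return (x - (ox - x), y - (oy - y))
    # Rule 2: otherwise move one unit towards the light.
    lx, ly = light_pos
    return (x + (lx > x) - (lx < x), y + (ly > y) - (ly < y))

swarm = [(random.randint(0, 20), random.randint(0, 20)) for _ in range(30)]
for _ in range(50):
    swarm = [step(r, light_pos=(10, 10), obstacles=[(5, 5)]) for r in swarm]
print(swarm)  # individually trivial rules; collectively the agents cluster around the light
```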

The method employed by machine learning is to evolve the simple rulesets the robot is initially programmed with into more complex behaviours. This can take many generations, and the robot will undoubtedly generate non-useful behaviours along the way (like a child putting horrible things in its mouth, it may decide that bumping into a wall is a good thing, although this is generally recognised by the robot as sub-optimal fairly quickly!). But through this autonomous evolutionary learning process, machines can optimise their outputs in ways that far outstrip human attempts, e.g.:

http://www.spaceref.com/news/viewpr.html?pid=14394
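A toy version of that generational process might look like this. The "ruleset" here is just a vector of numbers and the fitness function is invented for the example; real evolutionary robotics scores fitness on the robot's actual task performance.

```python
import random

def fitness(ruleset):
    """Invented stand-in: reward rulesets whose parameters approach a target behaviour."""
    target = [0.2, 0.8, 0.5, 0.1]
    return -sum((r - t) ** 2 for r, t in zip(ruleset, target))

def mutate(ruleset, rate=0.1):
    """Produce a slightly perturbed copy of a ruleset."""
    return [r + random.gauss(0, rate) for r in ruleset]

population = [[random.random() for _ in range(4)] for _ in range(20)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]                                   # keep the fittest rulesets
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]
    # non-useful mutations appear constantly but are selected out over the generations
print(max(population, key=fitness))
```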

Sentient robots? Sooner than you might think!
Sorry, I've rambled on a bit here but then I am a roboticist!
 

Ben Legend

New member
Apr 16, 2009
1,549
0
0
I would forgive robots that become sentient, so long as they are similar to HK-47.

But I hope robots don't become sentient. Or else we're all screwed.
 

Lord Azrael

New member
Apr 16, 2009
125
0
0
The most basic form of machine learning is a greedy hill climber, that is, an algorithm that mutates its ruleset and keeps the new mutated version if it is 'fitter' (better) than the previous one. The algorithms range from this simple hill climber to some really horrifically complex things like Learning Classifier Systems and Artificial Neural Networks. Wikipedia has some good articles, so I won't bore you all to death!
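For what it's worth, that hill climber fits in a few lines. The bitstring "ruleset" and the count-the-ones fitness function are just placeholders to show the accept-if-fitter loop, not any particular robot's representation.

```python
import random

def fitness(ruleset):
    """Placeholder fitness: number of 1s in the bitstring (the classic 'OneMax' toy problem)."""
    return sum(ruleset)

def mutate(ruleset):
    """Flip one random bit of the ruleset."""
    i = random.randrange(len(ruleset))
    return ruleset[:i] + [1 - ruleset[i]] + ruleset[i + 1:]

ruleset = [random.randint(0, 1) for _ in range(32)]
for _ in range(1000):
    candidate = mutate(ruleset)
    if fitness(candidate) >= fitness(ruleset):   # greedy: keep the mutation only if it is at least as fit
        ruleset = candidate
print(fitness(ruleset), ruleset)
```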
 

Zombie_Fish

Opiner of Mottos
Mar 20, 2009
4,584
0
0
It could happen, but in turn it may not. I don't like predictions that much and usually try to ignore them.
 

Trivun

Stabat mater dolorosa
Dec 13, 2008
9,831
0
0
Eventually, I think so, yes. The rate at which technology has advanced in recent years means it's more than likely to happen, provided the world doesn't end by then (North Korea are just insane, after all...). Look at it this way. Thousands of years ago, we invented the wheel. It took us a while after that to invent the wagon, and even then we stuck with that for thousands of years, only developing the car in the late 1800s. Yet in a hundred and fifty years, we've made more technological advancements than we ever made in all the millennia before then. We put humanity on the Moon less than a century after inventing the motor engine. That should tell us all a little about the ingenuity of humanity. And since Charles Babbage's Difference Engine, the first computer, in the 1800s, we've developed greatly in computing too. The precursors of the computers we use nowadays were built in the Sixties or Seventies (I forget which). Look at how far we've come since then. Technology has never advanced more rapidly, and it stands to reason that, given the ongoing research into robotic sentience and AI, we're likely to see sentient robots before the end of the century.
 

CuddlyCombine

New member
Sep 12, 2007
1,142
0
0
Trivun basically covers what I was going to say. Technology advances exponentially, which is great, considering humans are a very lazy species. As soon as it becomes feasible, I guarantee you somebody is going to program a computer with the directive of improving itself and streamlining the code it runs on. Then, provided the computer doesn't end up stuck in an infinite loop or something, it will probably run into sentience at some point.

I think we should just let the computers be sentient. It isn't like they can kill us, anyway. However, you're asking for trouble when you make legions of heavily-armed soldier robots and put them under the control of a supercomputer. So let's just stick with the robot-in-a-PC for now.
 

Guitar Gamer

New member
Apr 12, 2009
13,337
0
0
*facepalm*
dude... does the name Skynet mean anything to you?
Like I say: if it ain't zombies, it's the sentient robots.
 

Vanguard_Ex

New member
Mar 19, 2008
4,687
0
0
doomdiver_16 said:
Before I start I apologise if this has been done before but the search didn't bring up anything.

I have had this argument with my friends many times before. Will it ever be possible to create sentient robots? Now obviously it is impossible to perceive how it could happen with today's technology, but does that mean it will never happen?

If a robot is created, it will have to have programmed into it how to operate, how to act, etc. Surely if it has all been programmed, it will be impossible for it to act outside of this programming?

On the other hand, we have the argument of advancements in technology. Many years ago, people would have been ridiculed for ever thinking some of the technology we have today might exist; for example, in medieval times could anyone ever have thought that people might be able to play something like Pong, never mind the technologically advanced games we have today?

There is also the argument of a very complex series of random number generators making all of its decisions, with the generators varying in scale and bias depending on the decision, but could this truly be classed as sentient?

What are your thoughts on the matter? Will it be possible for humans to create robots that think for themselves or not?
That's what I always think... surely if a robot could only act through programming, it can't do anything it isn't programmed to?
 

Guitar Gamer

New member
Apr 12, 2009
13,337
0
0
Dagodweezl said:
No we'll all be eaten by zombies before we figure it out.
Not if we can help it! Will you help me build a fortress that is zombie-proof? (Basically, I need funding.)
 

mooncalf

<Insert Avatar Here>
Jul 3, 2008
1,164
0
0
Sure. If a human operates under physical laws, then even if our sentience is some "x-factor", anything created with similar complexity would qualify for the same consideration as something similarly ineffable, and we know humans love to increase the complexity of their toys.
 

Eldritch Warlord

New member
Jun 6, 2008
2,901
0
0
Definitely, it's just a matter of time.

Also, something people don't realize is that human violence is caused by a collection of instincts, primarily self-preservation and the pack instinct. These are essentially programmed parts of human consciousness; an artificial intelligence wouldn't necessarily have them. In fact, it would probably have "absolute loyalty" instincts which would prevent violence against its masters. One more thing: humans only suppress their instincts in support of others.
 

Sayvara

New member
Oct 11, 2007
541
0
0
Sentient machines? I seriously doubt we'll be there within the upcoming 50 years, mostly because our current computer technology is vastly different from biological brains.

So no, I really don't think machines will become human-like... I think the opposite: humans will become machine-like. We are already on the verge of real human-machine interfaces... prosthetics like this one [http://www.youtube.com/watch?v=T6R5bm6qx2E] hint at how close we are to directly connecting ourselves to machines. Input to the optical and aural nerves is close too.

Extrapolating from this, I think that humans will be operating machines as parts of themselves far earlier than machines become sentient.

/S
 

dwightsteel

New member
Feb 7, 2007
962
0
0
My take on it is this: if the atheists of the world are correct, and everything concerning what we perceive to be a "soul" is just a series of synapses firing off in the brain to create our personalities, our loves, our interests, our dreams and our decisions; if our brains are merely organic computers that interpret the information we perceive around us and have built into them the programming to form conclusions, opinions and emotions, then theoretically, shouldn't it be possible to create a synthetic computer that can do the same thing?

That being said, if these concepts are inextricably linked to some X-factor (that for the sake of argument we'll call a soul), and its existence is not quantifiable beyond the idea of it being only energy, then who knows?

But as it stands, computer programmers and software developers have been and are creating some pretty sophisticated programming that replicates all kinds of human behaviors. Currently, though, I don't see Skynet declaring war on mankind anytime soon.
 

Sayvara

New member
Oct 11, 2007
541
0
0
dwightsteel said:
That being said, if these concepts are inextricably linked to some X-factor (that for the sake of argument we'll call a soul), and its existence is not quantifiable beyond the idea of it being only energy, then who knows?
Alternate take on it: what if a man-made machine acquires a soul? "Sorry God.... you are now officially obsolete", hm? ;)

/S