SnakeoilSage said:
FarleShadow said:
Of course they are going to get progressively smarter until they are smarter than we are; we're bound by the limits of biology and neurotransmitters. Computer minds would be bound only by the limits of whatever medium they run on (so they'll be crazy-fast).
The most powerful computer right now has less than 1/6th of our brain's memory capacity and performs less than 1/10th of the calculations our brains process every second. I don't have the exact data on me, but the math was done recently.
Just because lifting your arm seems effortless to your consciousness doesn't mean that your brain isn't running countless calculations to perform the task.
*Bit about coding*
People keep expecting some kind of singularity: that just because a chess program has been programmed with thousands of potential moves to react to human actions, AI will somehow explode out of the Internet. It won't. No amount of information uploaded into a computer or series of computers will give it the capacity to perceive that information as anything but data to be utilized by the humans accessing it. There is no "awareness" examining the information, no computer attempting to process the info for itself.
The fact that we can't even define sentience ourselves proves that it is a state of being we cannot emulate no matter how advanced our technology is, because our biological brains, with millions of years of programming behind them, cannot yet comprehend its meaning.
Wow, that was incredibly patronising and insultingly easy to refute.
For one, the comparison between human mind and machine breaks down immediately: the fastest computer in the world can perform 260 trillion operations a second, while human minds can only react to a stimulus after 120 milliseconds at best. Our own biological nature limits our maximum speed of thought; machines are not limited in this regard.
Memory falls into a different category: human memories are very efficient at packing stuff in, but they are laughably inaccurate when it comes to decent recall (since we filter input based on our own preconceptions and expectations) and tend to ditch information we don't need very rapidly. Computer memory has none of these benefits or flaws; what is recorded is a near-perfect record.
And yes, while it is true I do not consciously call my arm to raise when I need to pick something up, the process isn't as complicated as you make it out to be: spatial awareness, the specific length of my arm, coded instructions to the muscle groups and, finally, engagement of the 'get thing' action are all very easy for computers to replicate (see the sketch below). Willow Garage's PR2 can perform an identical (if slow) feat with a simple command.
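For the curious, the geometry behind that kind of reach really is simple to code. Here is a minimal sketch of two-link planar inverse kinematics: given an upper-arm length, a forearm length and a target point, it solves for the two joint angles. The arm lengths and target are made up for illustration; this is not the PR2's actual control code, just the sort of calculation involved.

```python
# Minimal sketch of the 'reach for a thing' computation: given the lengths of
# an upper arm and forearm and a target point, solve the two joint angles
# using two-link planar inverse kinematics. Purely illustrative; not how the
# PR2 (or a human arm) is actually controlled.
import math

def reach(target_x, target_y, upper_len=0.3, fore_len=0.25):
    """Return (shoulder_angle, elbow_angle) in radians, or None if out of reach."""
    dist_sq = target_x ** 2 + target_y ** 2
    dist = math.sqrt(dist_sq)
    if dist > upper_len + fore_len or dist < abs(upper_len - fore_len):
        return None  # target outside the arm's workspace

    # Law of cosines gives the elbow bend.
    cos_elbow = (dist_sq - upper_len ** 2 - fore_len ** 2) / (2 * upper_len * fore_len)
    elbow = math.acos(max(-1.0, min(1.0, cos_elbow)))

    # Shoulder angle: direction to the target minus the offset from the bent elbow.
    shoulder = math.atan2(target_y, target_x) - math.atan2(
        fore_len * math.sin(elbow), upper_len + fore_len * math.cos(elbow)
    )
    return shoulder, elbow

if __name__ == "__main__":
    print(reach(0.4, 0.2))  # joint angles that put the hand at (0.4, 0.2)
```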
And so, would an AI come about from simply pumping more data into a program? Hardly! You don't get intelligence from a database, nor, I suspect, do you actually know much of what you speak in this regard. My offer? A large system of genetic algorithms, designed to categorise information and continually improve its accuracy (a toy sketch of that kind of loop follows below). From that point, I imagine it's only a matter of time before it becomes aware of itself, from a machine-awareness point of view. But that's speculation of the wildest sort.
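To make the idea concrete, here is a toy sketch of that genetic-algorithm loop: a population of candidate "categorisers" is scored by accuracy on some labelled data, the better half is kept, and the rest are refilled by crossover and mutation. The dataset, fitness function and parameters are all invented for illustration; it shows the shape of the loop, not any real self-improving system.

```python
# Toy genetic algorithm that evolves a simple linear categoriser and
# continually improves its accuracy over generations. Everything here
# (data, fitness, parameters) is illustrative only.
import random

# Toy labelled data: (features, category) pairs.
DATA = [([0.9, 0.1], 1), ([0.8, 0.3], 1), ([0.2, 0.7], 0), ([0.1, 0.9], 0)]

def classify(weights, features):
    # Categorise by the sign of a weighted sum of the features.
    score = sum(w * f for w, f in zip(weights, features))
    return 1 if score > 0 else 0

def fitness(weights):
    # Fitness = classification accuracy on the toy data.
    correct = sum(1 for feats, label in DATA if classify(weights, feats) == label)
    return correct / len(DATA)

def mutate(weights, rate=0.1):
    # Perturb each weight with small Gaussian noise.
    return [w + random.gauss(0, rate) for w in weights]

def crossover(a, b):
    # Single-point crossover between two parents.
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=20, generations=50):
    population = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the better half as parents, refill with mutated offspring.
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best weights:", best, "accuracy:", fitness(best))
```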
It is not a fact that we cannot define sentience; it is hard, but still within our understanding given time, nor does the lack of that definition mean that the emulation of sentience is impossible. It could well be that the very first, randomly spawned AI gives us the definition purely in the act of its creation.