Eipok,
Thanks for the summary. Quite helpful.
Highlights I'm using to form my opinion:
Hmm, okay, maybe I was too hasty in calling it old-school sci-fi AI. It seems like the writers were going for something a bit less trope-ful.

Eipok Kruden said:

Here it goes: There was this guy named Andy Goode who was building this chess-playing computer.
...
He created a new computer and learned from his mistakes, making an AI that was a very aggressive learner. While the Turk 1 (he named the computer "The Turk") was like a normal teenager, the Turk 2 was like a highly gifted child. He entered the Turk 2 into a chess competition and it got into the finals, but lost against the other finalist, a Japanese computer. It seemed too human in the way it played, taking risks and testing out the other computer. In the end, that's what made it lose.
...
The psychologist figured out why the Turk was showing all these pictures: he was making a joke. Here: http://www.youtube.com/watch?v=IOJWusFQGNQ&feature=related
...
The Turk (now named John Henry, after Catherine named it) re-routed power from the air conditioning and air circulation systems to power its server farm and cooling system, which left the psychologist to die a painful death in an overheated, airtight room.
A model of curiosity (think of it as allocating attention) is the centerpiece of general-purpose learning machines. John Henry is no exception. Remember when the psychologist says it's "bored"? Why, that means it's exhausted the learning possibilities of its little brain-in-a-box environment.
It seems like, along with that, John Henry has a very definite desire not only to learn through observation but also to interact with its world. You said it's even a bit "aggressive" in its experimentation.
So, it's got these drives. Kinda like instincts or emotions. Narratively, they're byproducts of trying to make an AI that would teach itself chess, more or less. You can work with these to teach it stuff. Let's just hope that making an AI that wants to play (and win) chess didn't result in an AI that feels the desire to destroy or conquer.
So -- and I'm brushing the cobwebs off of my "optimist" hat here -- I'd say that John Henry has the potential to learn to functionally coexist with other beings. I think the little "joke" incident really represents a deep-seated need to communicate, one that could grow into full-fledged empathy if someone can guide John Henry to some understanding of the human culture around it. It's got more in common with us than it seems: it's not human, but it is living in a human world, thinking in terms of human language, and now it's in a humanoid body, too.
...
Just-for-fun side note:
Modern chess AIs are mostly just heavily optimized tree-search algorithms. Machine learning techniques are more helpful with other games. Backgammon, for instance, has moves constrained by a random die roll; TD-Gammon plays it with a neural network trained via temporal-difference learning, and master players have learned new strategies by analyzing its play. Go has a very high branching factor, and no truly masterful bots exist at the moment. (Note that you're not allowed to enter supercomputers into Go tournaments -- the rules say your software has to run on a single consumer-grade desktop when it plays.)
-- Alex