This is a difficult discussion. On one hand, it could be incredibly useful: a machine that thinks for itself, and that you don't have to tell what to do every five minutes, is obviously practical.
On the other hand, it could go horribly wrong, The Matrix style: the machines overthrow us, and we have to fight them.
Since I don't trust humanity (including myself) with great power and dangerous technology, I'm going to have to say... no, this should not be invented.
EDIT: No, AI has not already been invented. What I mean by AI is a program that can generate its own logic. I don't mean the stuff in games, where enemies shoot at you rather than at the door you just came through. That behaviour was pre-written; it's simply the result of clever programming. If a program can generate solutions to problems we didn't write into it, that's AI. It would be like your calculator being able to solve grammar problems.
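To show what I mean by "pre-written", here's a minimal sketch of typical scripted game enemy logic (in Python; the names like Enemy and pick_target are made up for illustration). Every decision path is spelled out by the programmer in advance, so nothing the enemy does is a solution it came up with on its own.

```python
# Minimal sketch of scripted game "AI": every behaviour below was
# written in by hand. The enemy never invents a new tactic; it just
# walks a decision tree the programmer laid out in advance.

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

class Enemy:
    def __init__(self, position, attack_range=10.0):
        self.position = position
        self.attack_range = attack_range

    def pick_target(self, player_position, player_visible):
        # Hand-coded rules: shoot the player if we can see them and
        # they're in range; otherwise move toward them. There is no
        # reasoning here, only pre-written branches.
        if player_visible and distance(self.position, player_position) <= self.attack_range:
            return "shoot", player_position
        return "move_toward", player_position

enemy = Enemy(position=(0.0, 0.0))
action, target = enemy.pick_target(player_position=(3.0, 4.0), player_visible=True)
print(action, target)  # -> shoot (3.0, 4.0)
```

A program that could come up with the "shoot if visible and in range" rule by itself, without it being written in, would cross the line I'm talking about.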