Wednesday, January 14, 2015

Artificial Intelligence I: The Limitation Game.


I propose to consider the question, "Can machines think?" This should begin with definitions of the meaning of the terms "machine" and "think."
- Alan Turing, Computing Machinery and Intelligence
On Christmas Eve, I attended an afternoon showing of The Imitation Game, the heavily fictionalized but well-acted biopic about the life of Alan Turing, the noted mathematician, computer theorist and World War II cryptographer whose life ended in disgrace and presumed suicide following his arrest for homosexuality in 1952.* As a science fiction fan, I was surprised by the title, which has nothing whatsoever to do with cryptography. "The Imitation Game" is a reference to what is more commonly known as the Turing Test, as detailed by Turing in his 1950 paper, Computing Machinery and Intelligence.

For those of you unfamiliar with this touchstone of artificial intelligence theory, the Turing Test is very simple. A judge sits in one room, and in two other rooms are a human being and a computer. The human being and the computer can only communicate with the judge via text displayed on a computer screen. (I believe that in the original version, the questions and answers were paper-based, but monitors and keyboards certainly speed things up.) It is the judge's job to decide which one of the communicants is the computer, based on their interaction. It is the human's job to be a human, and the computer's job to imitate a human**.
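For the programmatically inclined, the structure of the test can be sketched in a few lines of Python. This is purely illustrative: the canned responders, the room labels and the sample question are all my own inventions, standing in for a real human, a real chatbot and a real interrogation.

```python
import random

# Hypothetical stand-ins for the two hidden participants; in a real
# test these would be a live person and a conversational program.
def human_responder(question):
    return "Probably stay in with a book, honestly."

def machine_responder(question):
    return "Probably stay in with a book, honestly."  # the imitation

def turing_test(judge, rounds=5):
    # Hide the participants behind anonymous rooms A and B,
    # assigned at random so the judge can't rely on position.
    rooms = {"A": human_responder, "B": machine_responder}
    if random.random() < 0.5:
        rooms = {"A": machine_responder, "B": human_responder}

    transcript = []
    for _ in range(rounds):
        question = "What would you do on a rainy Sunday?"
        # The judge sees only the text each room sends back.
        transcript.append({room: fn(question) for room, fn in rooms.items()})

    # The judge names the room they believe holds the machine.
    guess = judge(transcript)
    return rooms[guess] is machine_responder  # True if the machine was caught

# A judge guessing blindly identifies the machine only half the time;
# the machine "passes" when judges can do no better than chance.
caught = turing_test(lambda transcript: random.choice(["A", "B"]))
print("machine identified" if caught else "machine passed")
```

The essential point the sketch captures is the blinding: the judge's only evidence is the transcript, so anything that produces convincing text, by whatever means, can win.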

Turing's simple experiment for establishing artificial intelligence is a standard reference for science fiction authors: there are Turing scales for degree of AI, Turing certifications, and more threateningly, William Gibson's Neuromancer introduces the idea of the Turing Police:  an international organization responsible for the elimination of unauthorized or rogue AIs.

Science fiction aside, real-world computers have failed the Turing Test miserably, as demonstrated by the annual Loebner Prize competition, originated by American inventor Hugh Loebner in 1990. To date, no computer - or more accurately, no computer program - has managed to win the $100,000 award by successfully convincing the judges of its humanity. Several smaller prizes have been awarded to the best program, but so far it's really been to acknowledge the best of a poor lot.

However, it's an interesting conceit to demand that a computer convince someone that it's a human. Why should the ability to mimic humanity be a requirement for consciousness or sentience?

It's easy to say that artificial intelligence would need to be based on the human mind; after all, what else do we have to use as a model? On the other hand, there's no other area of technology that follows this path: cars run on wheels rather than mechanical legs, and cranes don't feature huge arms with hands and fingers to pick up cargo. Technology has always been used to exceed the limitations of the human form rather than imitate it, and artificial intelligence might do well to take the same approach.

Maybe we need to come up with a new name for the game.
- Sid

*  At that point in time, homosexuality was a criminal offense in England.

** Based on the way that people react to tests, I actually suspect that in practice they both end up trying to imitate a human.
