Turing Test

The Turing test asks whether a computer can convince a person, through conversation, that it too is a person. Alan Turing argued that if a human judge could not tell the computer apart from another human, then that computer could be said to be as intelligent as a human. No computer has yet passed the Turing test, though a chatterbot called Elbot came close in 2008. The test was introduced by Alan Turing in his 1950 paper ‘Computing Machinery and Intelligence,’ which opens with the words: ‘I propose to consider the question, "Can machines think?"’ Since ‘thinking’ is difficult to define, Turing chose to ‘replace the question by another, which is closely related to it and is expressed in relatively unambiguous words’:

‘Are there imaginable digital computers which would do well in the imitation game?’ (a party game in which a man and a woman go into separate rooms and guests try to tell them apart by writing a series of questions and reading the typewritten answers sent back; the man tries to trick the guests into making the wrong identification, while the woman tries to help them identify her correctly). This question, Turing believed, is one that can actually be answered. In the remainder of the paper, he argued against all the major objections to the proposition that ‘machines can think.’ In the years since 1950, the test has proven both highly influential and widely criticized, and it remains an essential concept in the philosophy of artificial intelligence.

The question of whether it is possible for machines to think has a long history, which is firmly entrenched in the distinction between dualist and materialist views of the mind. According to dualism, the mind is non-physical (or, at the very least, has non-physical properties) and, therefore, cannot be explained in purely physical terms. According to materialism, the mind can be explained physically, which leaves open the possibility of minds that are produced artificially.

Researchers in the United Kingdom had been exploring ‘machine intelligence’ for up to ten years before the field of AI research was founded in 1956. It was a common topic among the members of the Ratio Club, an informal group of British cybernetics and electronics researchers that included Alan Turing, after whom the test is named.

In 1966, Joseph Weizenbaum created a program that appeared to pass the Turing test. The program, known as ELIZA, worked by examining a user’s typed comments for keywords. If a keyword was found, a rule that transforms the user’s comment was applied, and the resulting sentence was returned. If no keyword was found, ELIZA responded either with a generic riposte or by repeating one of the user’s earlier comments. Weizenbaum also designed ELIZA to mimic a Rogerian psychotherapist, allowing it to be ‘free to assume the pose of knowing almost nothing of the real world.’
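The keyword-and-rule loop described above can be sketched in a few lines of Python. This is only an illustration of the technique, not Weizenbaum’s actual script: the patterns, response templates, and the `respond` helper are invented here for demonstration.

```python
import random
import re

# Illustrative ELIZA-style rules (invented for this sketch, not from
# Weizenbaum's original script). Each rule pairs a keyword pattern with
# response templates; {0} echoes the captured part of the user's comment.
RULES = [
    (r"\bI need (.*)", ["Why do you need {0}?",
                        "Would it really help you to get {0}?"]),
    (r"\bI am (.*)", ["How long have you been {0}?",
                      "Why do you think you are {0}?"]),
    (r"\bmother\b", ["Tell me more about your family."]),
]

# Generic ripostes used when no keyword matches.
GENERIC = ["Please go on.", "I see.", "What does that suggest to you?"]

def respond(text, history):
    """Return a reply: apply the first matching keyword rule; otherwise
    fall back to a generic riposte or echo an earlier user comment."""
    for pattern, templates in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            template = random.choice(templates)
            return template.format(*match.groups())
    if history and random.random() < 0.5:
        return "Earlier you said: " + random.choice(history)
    return random.choice(GENERIC)
```

For example, `respond("I need a break", [])` produces a question that echoes ‘a break’ back at the user, while input with no keyword draws a generic riposte; this shallow pattern matching is exactly why the Rogerian pose, which never requires real-world knowledge, suited ELIZA so well.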

Kenneth Colby created PARRY in 1972, a program described as ‘ELIZA with attitude.’ It attempted to model the behavior of a paranoid schizophrenic, using a similar (if more advanced) approach to that employed by Weizenbaum. To validate the work, PARRY was tested in the early 1970s using a variation of the Turing test. A group of experienced psychiatrists analyzed a combination of real patients and computers running PARRY through teleprinters. Another group of 33 psychiatrists were shown transcripts of the conversations. The two groups were then asked to identify which of the ‘patients’ were human and which were computer programs. The psychiatrists made the correct identification only 48 per cent of the time, a figure consistent with random guessing.
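The claim that 48 per cent is ‘consistent with random guessing’ can be checked with an exact binomial test against chance. The source does not state how many individual identifications were made, so the figure of 100 judgments below is purely a hypothetical illustration.

```python
from math import comb

def binomial_two_sided_p(k, n):
    """Exact two-sided binomial test against chance (p = 0.5): the
    probability of an outcome at least as far from n/2 as k is."""
    dev = abs(k - n / 2)
    return sum(comb(n, i) for i in range(n + 1)
               if abs(i - n / 2) >= dev) / 2 ** n

# Hypothetical illustration: 48 correct identifications out of 100.
p = binomial_two_sided_p(48, 100)
```

A large p-value (here well above the conventional 0.05 threshold) means the psychiatrists’ accuracy cannot be distinguished from coin-flipping, which is the sense in which the result is ‘consistent with random guessing.’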

In the 21st century, versions of these programs (now known as ‘chatterbots’) continue to fool people. ‘CyberLover,’ a malware program, preys on Internet users by convincing them to ‘reveal information about their identities or to lead them to visit a web site that will deliver malicious content to their computers.’ The program has emerged as a ‘Valentine-risk’ flirting with people ‘seeking relationships online in order to collect their personal data.’

John Searle’s 1980 paper ‘Minds, Brains, and Programs’ proposed an argument against the Turing test known as the ‘Chinese room’ thought experiment: suppose that there is a program that gives a computer the ability to carry on an intelligent conversation in written Chinese. If we give the program to an English speaker to execute its instructions by hand, then, in theory, the English speaker would also be able to carry on a conversation in written Chinese. However, the English speaker would not understand the conversation. Similarly, Searle argues, a computer executing the program would not understand the conversation either. Searle held that software (such as ELIZA) could pass the Turing test simply by manipulating symbols of which it had no understanding; without understanding, it could not be described as ‘thinking’ in the same sense people do. Therefore, Searle concludes, the Turing test cannot prove that a machine can think.
