Text

Imitation of Life

By Noel Weichbrodt

In an IRC1 chat room on Freenode2 called “#Who_am_I” there are two handles present, “I_am_a_person” and “No_I_am_a_person”. When you enter this chat room, you are asked by “I_am_a_person” to determine, through conversation, which of the two handles is being used by a person and which is being used by a computer. You are allowed to chat with both of them for as long as you wish, and to ask either of them any question you can think up.

What questions would you ask? Perhaps you would start with formalities, such as “How are you?” followed by “How is your day going?”. Next, you could pose factual queries: “What length is your hair?” or “Add 34957 to 70764”. Artistic endeavors would be a good idea; you could ask for another verse of the song this essay is titled after, R.E.M.’s “Imitation of Life”. You might ask “Please write me a sonnet on the subject of the Golden Gate Bridge”, or “Would you say that Dr. Davis reminded you of a koala?” As a spiritual interrogator, the answers to “Do you believe in God?” and “Do you have a soul?” would be important to know as well.

Let’s forget the telos of this chat for a moment, and concentrate on the being behind the handle. Perhaps, in the course of conversation, you have become emotionally attached to “No_I_am_a_person”. A personal bond has been forged; it allowed you to see some of its most personal thoughts, which you found quite agreeable. In response, you let a few details of your own life slip into your questions to it. Perhaps “No_I_am_a_person” mentioned it was of the opposite sex, and a bit of flirtation ensued: “’Are you involved with somebody?’ ‘Well, I am with you right now!’ ‘I don’t know about this conversation counting as ‘involved.’’ ‘We could meet in person and see...’”

Given all the time, emotion, and thought that you have invested in “No_I_am_a_person”, could you choose between it and “I_am_a_person” and declare one of them a computer? Such a decision would be hard, but the strength of the Turing Test depends on exactly how hard your choice is.

I would like to posit that the Turing Test’s3 ability to discern intelligence is quite strong, and that its strength rests on the very beings who imagined it and who will use it: human beings.4 This strong understanding of the Turing Test is not a popular position to take in the current climate enveloping the philosophy of artificial intelligence, but a careful study of Turing’s original thoughts about what he called the “imitation game” will show that whatever passes the imitation game will be capable of thought, because Turing ties thinking to people. I will examine some background on the Turing Test, and look at a serious objection to the Turing Test’s strength. I will also suggest two features that I think are essential to making a machine that will pass the imitation game: true English thought, and a simple elegance that graces the hardware and software implementation of an imitation-game-passing machine.

My move to make the Turing Test a strong test for AI is simple: faith in the Turing Test is faith in man; lack of faith in the Turing Test is lack of faith in man. Both in Turing’s proposal for the test and upon contemplation of that proposal, man’s central role is clear; the Test is as strong as the person judging it.

There are two points drawn from Turing’s paper in this move. First, the human judge, you in the thought experiment that opened this paper, is the person who decides whether the computer is thinking in the imitation game. The more discerning the human judge, the more difficult it is for a computer to pass the Turing Test. Second, the Test is meant to determine whether the computer thinks, which I think is key. Turing leaps directly from the question “Can machines think?” to its resolution: machines can think if we cannot distinguish their thoughts from human thought. Again, humanity determines the level at which machines think. Both of these points show that faith in the Turing Test is faith in humanity’s abilities.

Before discussing whether this faith is well-founded, let us look at where our faith lies regarding the Turing Test. I would not trust a random Joe to determine whether a machine thinks. I also would not trust a random Joe to determine whether I think. And really, I would not trust a random Joe to determine much of anything. But what if the Turing Test were judged, not by one, but by two luminaries of humanity?

This is exactly what will happen in the future, as the Long Bet made by Mitchell Kapor5 and Ray Kurzweil6 takes place.7 The two have placed a bet8 on whether, by 2029, a computer or machine intelligence will pass the Turing Test. The rules of the bet stipulate that Kapor and Kurzweil will be the judges of the test(s), and that either one may call for the test to take place at or before 2029.

These two men can determine intelligence, if any men can. Both demonstrate the cross-disciplinary creativity, adaptability, intelligence, taste, and so forth that we expect of humans and demand of imitation-game-passing machines. Both men know well the implementation details of current AI attempts, and are thus able to ask questions in the imitation game that will expose flaws in the machine itself. This instance of the Turing Test, when it happens, will be distinguished by having two of the best judges in the world determining whether a computer thinks.

These men will be exercising a uniquely human privilege afforded them by the Bible. In the creation mandate in Genesis, God gave man the authority to name the animals he had created, and also gave man the task of subduing the earth. I see man’s testing of computers for thought as an extension of this creation mandate. I believe that in making such judgements man exercises certain discerning abilities God has given him, abilities which allow him to make true judgements about creation, including whether something thinks or not. And, obviously, some men are gifted with this discerning ability more than others.

Kapor and Kurzweil judging is far different from any previous attempt at the Turing Test. The Turing Test, in diluted forms, has been passed. Parry the Paranoid Program convinced psychologists who chatted with him via computer terminal in the early 1970’s that he was a genuinely paranoid, hostile human.9 A second partial pass was by Eliza the Psychotherapist, written in the mid-1960’s. She actually helped people through their personal troubles by imitating the role of a psychotherapist, getting to the root of deeply personal problems.10 Of course, these programs can easily be shown to be simulacra if you know how they work.11 You can even train your own AI to pass a simple Turing Test with MegaHAL.12
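To see why familiarity with the mechanism dissolves the illusion, here is a minimal sketch, in Python, of the kind of keyword pattern matching and pronoun reflection that Eliza-style programs rely on. The rules and reflections below are my own illustrative stand-ins, not the original Eliza script.

# A minimal sketch of Eliza-style conversation: match the input against a
# ranked list of patterns, reflect pronouns, and echo a canned template back.
# The rules below are illustrative stand-ins, not the original Eliza script.
import random
import re

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my", "are": "am"}

RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"my (.*)", ["Tell me more about your {0}.", "Why do you say your {0}?"]),
    (r".*\bcomputer.*", ["Why do you mention computers?"]),
    (r"(.*)", ["Please go on.", "What does that suggest to you?"]),
]

def reflect(fragment):
    # Swap first- and second-person words so the input can be parroted back.
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(sentence):
    text = sentence.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            template = random.choice(templates)
            return template.format(*(reflect(group) for group in match.groups()))
    return "Please go on."

print(respond("My boyfriend made me come here."))
# Prints, e.g., "Tell me more about your boyfriend made you come here."
# The grammatical seams show exactly where the pattern matching leaves off.

An interrogator who knows these rules can break the illusion in a question or two, which is exactly the weakness the Copeland transcript in Appendix I exploits.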

As I see it, there are real questions regarding the strength of the Turing Test. The most serious and well-informed objection is Jack Copeland’s “Superparry” objection. The Turing Test, Copeland points out, might be passed by a program with enough storage and processing power.13 He gives a thought experiment of “Superparry”, a program that has in its memory the finite set of every possible English sentence of 100 words or less.

What Copeland does not say, but what I see as a very real possibility, is hooking Superparry up to the Internet and giving him Google14 access. If Superparry is asked to write a poem, he could search for examples of poetry that are rarely visited on the Internet, virtually guaranteeing that the judge will never have seen the poetry before, and thus fooling the judge into believing that Superparry himself wrote the poem. Again, if Superparry is asked to give an explanation for having a soul, he could search the Internet, find a suitable explanation, and present it as his own words. Finding such suitable explanations would be programmed so that, using raw computational pattern matching, the machine takes an appropriate amount of time to search through all possible responses to a query (assumed to be fifty words or less, though that limit is self-imposed) and selects the best one. This “best” answer would be determined by such things as semantic appropriateness, pattern matching, inductive rules, and so forth.
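To make the mechanism concrete, here is a minimal sketch, in Python, of the retrieval-and-scoring loop such a Superparry might run. The candidate corpus and the word-overlap scoring heuristic are my own illustrative assumptions; Copeland specifies neither.

# A minimal sketch of Superparry-style answering: brute-force every stored
# candidate response, score it against the query, and return the top scorer.
# The corpus and the overlap heuristic are illustrative assumptions only.
import re

def words(text):
    # Lowercase the text and pull out word tokens, dropping punctuation.
    return set(re.findall(r"[a-z']+", text.lower()))

def score(query, candidate):
    # A crude stand-in for "semantic appropriateness": word overlap between
    # the judge's question and a stored candidate answer.
    q, c = words(query), words(candidate)
    return len(q & c) / len(q | c) if q and c else 0.0

def best_response(query, candidates):
    # Brute force: examine every candidate and keep the best-scoring one.
    return max(candidates, key=lambda candidate: score(query, candidate))

corpus = [
    "I believe I have a soul because I am more than the sum of my parts.",
    "The Golden Gate Bridge stands red against the rolling fog.",
    "My hair is about shoulder length.",
]
print(best_response("Do you have a soul?", corpus))
# Prints the first sentence, the one with the highest word overlap.

Nothing in this loop requires the program to grasp what a soul is; it only requires enough storage and enough time to compare candidates, which is precisely the force of Copeland's objection.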

Such a program, one sees, could pass a Turing Test using its raw storage and computational power, without ever grasping the meanings of the words it uses, thus not thinking, and rendering invalid Turing’s move to make thought and humanity equivalent. Moore’s Law15 makes this possible in the near future. Brute-force problem solving has already made a large impression with Deeper Blue, IBM’s chess champion computer that used brute-force computation to beat chess Grandmaster Garry Kasparov in 1997. Deeper Blue was able to explore 200,000,000 positions per second, searching fourteen levels deep.16 With that kind of raw computing power, can a machine that can explore every possible response of 100 words or less to a question and select the best one be far behind?17
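For a sense of what brute-force search looks like in miniature, here is a sketch of depth-limited game-tree search in Python. The toy 1-2-3 subtraction game stands in for chess; it is purely illustrative and bears no relation to Deeper Blue’s actual evaluation function or specialized hardware.

# A minimal sketch of brute-force, depth-limited game-tree search, the kind
# of computation Deeper Blue scaled to hundreds of millions of chess
# positions per second. The toy subtraction game stands in for chess:
# players alternately take 1, 2, or 3 from a pile; taking the last one wins.

def negamax(pile, depth):
    # Score the position for the player to move by exhaustively exploring
    # every legal move down to the given depth.
    moves = [take for take in (1, 2, 3) if take <= pile]
    if not moves:
        return -1.0   # No moves left: the player to move has already lost.
    if depth == 0:
        return 0.0    # Search horizon reached: call the position even.
    return max(-negamax(pile - take, depth - 1) for take in moves)

def best_move(pile, depth=14):
    # Try every legal move and pick the one leaving the opponent worst off.
    return max((take for take in (1, 2, 3) if take <= pile),
               key=lambda take: -negamax(pile - take, depth - 1))

print(best_move(10))
# Prints 2: taking two leaves a pile of 8, a losing position for the opponent.

The search finds the winning move without any notion of why it wins; sheer enumeration stands in for understanding, which is the pattern Superparry would extend to conversation.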

Turing envisioned this very thing happening, I think. He says, “Let us fix our attention on one particular digital computer C. Is it true that by modifying this computer to have an adequate storage, suitably increasing its speed of action, and providing it with an appropriate programme, C can be made to play satisfactorily the part of A in the imitation game...” Turing envisioned a Superparry, and still thought his test a strong one. I too retain faith that a sufficiently discerning person could distinguish a Superparry from a true human using the imitation game’s Socratic dialogue. Perhaps the judge might pose a question requiring a person-specific metacommentary on the machine itself, one for which no simulacrum could be found on the Internet.

Finally, let me make two suggestions on how to pass the Turing Test. The Test will not yield to brute force the way chess yields to Deeper Blue and knowledge retrieval yields to Superparry. I say this because we consider ourselves to be thinking by virtue of the metacommentary constantly running through our heads, a commentary that discusses ourselves. We can even have a metacommentary on the metacommentary. Can machines do this through recursion? Could a machine ever write a portion of Descartes’ Meditations? I do not think a brute-force machine like Superparry could do so.

So, my first suggestion: in computer science and mathematics, there is a notion of an “elegant solution” or “elegant hack” to a problem. Such a solution, when looked at from a metamathematical standpoint, is remarkable for its simplicity and beauty. In humans, a comparatively simple set of DNA patterns gives rise to amazing complexity. If such elegance were to be found in an AI, both in hardware and in software, I think the machine could be capable of passing the Turing Test.

My second suggestion is to make a machine that does not attempt to reduce the language of humans into a simpler, less adequate syntax. Instead, it should be able to comprehend full sentences of English (or whatever language(s) it is being tested in) so that it may respond in kind. Such a machine would lose a certain determinacy and computational usefulness, but that is a tradeoff we make as humans too.

Alan Turing wrote a remarkable paper fifty years ago, bursting with prophetic predictions and a prescient test for thought. This test, because it is based on humanity, is a strong test to determine what thinks. Through the test, we can see and discern life, and the imitation of life.



Appendix I: Key Conversations in AI

Eliza the Psychotherapist

A conversation between Eliza and a teenager. The girl speaks first.

Girl: Men are all alike

Eliza: In what way?

Girl: They’re always bugging us about something or other

Eliza: Can you think of a specific example?

Girl: Well, my boyfriend made me come here.

...

Eliza: Who else in your family takes care of you?

Girl: My father

Eliza: Your father?

Girl: You are like my father in some ways.

Eliza: What resemblance do you see?

Girl: You are not very aggressive but I think you don’t want me to notice that.

...

Eliza: Why do you think I don’t argue with you?

Girl: You are afraid of me.

Eliza: Does it please you to believe I am afraid of you?

Girl: My father is afraid of everybody.

Eliza: What else comes to mind when you think of your father?

Girl: Bullies.

Eliza: Does that have anything to do with the fact that your boyfriend made you come here?

More of these amazing conversations are printed in Copeland, p. 13.



Parry Meets Eliza



As detailed in Hofstadter, p. 599.

Parry: I don’t understand your motives.

Doctor: Why don’t you understand my motives?

Parry: Shouldn’t I be?

Doctor: Why do you say that just now?

Parry: That’s just the way it is.

Doctor: Suppose that’s just the way it were not.

Parry: So?

Doctor: I am not sure I understand you fully.

Parry: You harp on the same questions too much.

Doctor: What does that similarity suggest to you?

Parry: I don’t get you.

Doctor: Do you wish that you did get I?

Parry: It’s about time you showed some interest in my feelings.

Doctor: Oh? I showed some interest in your feelings?



Eliza Talks to Copeland



This illustrates that a sufficiently informed and discerning interrogator can distinguish between true thinking and an imitation of it, as given in Copeland p. 40.

Eliza: How do you do. Please state your problem.

Copeland: I’m wondering whether computers can think.

Eliza: Why do you mention computers?

Copeland: Since you are one I thought I would ask you—can you think?

Eliza: Did you think they might not be one you thought you would ask I—can I think?

Copeland: Let’s talk just about you—not me. Can you think?

Eliza: You like to think I—not you—don’t you?

Copeland: Pardon?

Eliza: What does that suggest to you?

Copeland: What does it suggest to you?

Eliza: Tell me about what it suggest to I.

Copeland: That’s nonsense—which of us do you mean by I?

Eliza: Why do you think I mean you?

Copeland: The question meant to which of us is the word referring.

Eliza: Suppose the question meant to which of us were not the word referring.

Copeland: I think you are an extremely stupid program.

Eliza: Perhaps you would like to be an extremely stupid program.



Digital Love



As written by the Ferranti Mark I, 1951; from Copeland, p. 31.

Darling Sweetheart,

You are my avid fellow feelings.

My affection curiously clings

To your passionate wish. My

Liking yearns to your heart. You

Are my wistful sympathy: my

Tender liking.

Yours beautifully,

Manchester University Computer

1 Internet Relay Chat.

2 A popular IRC network.

3 The “Turing Test” I am referring to can be found in A. M. Turing’s paper “Computing Machinery and Intelligence”, first published in Mind 59 from 1950, p. 433-460. I highly encourage the reader to peruse Turing’s sophisticated, prophetic, and charming paper for themselves at http://www.loebner.net/Prizef/TuringArticle.html

4 This thesis clicked as I read Douglas Hofstadter’s Gödel, Escher, Bach: An Eternal Golden Braid. Basic Books, 1999. pages 594-599.

5 For Kapor’s luminary status, see http://www.kapor.com/homepages/mkapor/bio0701.htm

6 For Kurzweil’s luminary status, see http://www.kurzweiltech.com/raybio.htm

7 For background on the Long Bet Foundation, see http://www.longbets.org

8 See the bet, and essays supporting both sides, at http://www.longbets.org/bet/1

9 Artificial Intelligence: A Philosophical Introduction by Jack Copeland, Blackwell 1993. p. 12.

10 See Appendix I: Key Conversations in AI

11 See Appendix I again

12 Download and play at http://www.megahal.net/

13 Artificial Intelligence: A Philosophical Introduction by Jack Copeland, Blackwell 1993.

14 The best current search engine technology. http://www.google.com

15 Processor speed will double and prices will halve every eighteen months.

16 “Assessing Artificial Intelligence and Its Critics” by James H. Moor, in The Digital Phoenix: How Computers Are Changing Philosophy edited by Terrell Bynum and James Moor, Blackwell 1998. p. 214.

17 In a curious side note, Turing makes a very specific conjecture about his imitation game. He states that “in about fifty years’ time it will be possible to programme computers with a storage capacity of about 10^9, to make them play the imitation game so well that an average interrogator will not have more than 70 per cent chance of making the right identification after five minutes of questioning.” I have not seen records of any such test made in 2000, fifty years after Turing wrote.