Searle and the Ontology of a Thinking Machine
    [A] computer program dealing with semantics? That's impossible - it's a contradiction in terms! [...] Don't you know John Searle's "Mandarin cabin" stuff? He shows beyond a shadow of a doubt that pure syntax can never lead to semantics.[1]
While it is often difficult to positively assert the non-existence of something, John Searle's "Chinese Room" argument may actually serve as an example of a case in which we can, in good conscience, say that something does not exist and never will. Among the things that do not exist, most would agree, are things that are impossible. Searle's gedankenexperiment[2] was designed to demonstrate the falsity of "Strong AI," which Searle defines[3] as the doctrine that programs are identical with minds (or that thinking is the manipulation of formal symbols). Why Searle chose the word gedankenexperiment rather than "thought experiment" is beyond me; maybe he felt that it gave the argument more power. If that is the case, he should not have stopped there; rather, he should have said that his gedankenexperiment demonstrates that

~◊(∃x)(x = starkekünstlicheintelligenz)[4]

In this article I will argue on Searle's behalf, though I concede that I may be wrong. Nonetheless, this is a worthwhile discussion: Searle started a heated debate on the ontology of a specific thing (a thinking machine), and the fact that Searle and his detractors have been unable to convince one another for the last twenty years demonstrates that there is a great deal of misunderstanding. Most undergraduate classes that introduce Searle's argument later give reasons why it is false, or at least invalid. I am no longer convinced by the arguments covered in such classes, and now side with the very lonely John Searle.

The thought experiment places Searle himself (or some person like him[5]) in a room where he is passed slips of paper with symbols written on them. Searle, not knowing what the symbols mean, has a set of rules for responding to any given string of symbols with a different set of symbols. As it turns out, these rules allow Searle, a person ignorant of Chinese, to take part in a conversation in China's dominant idiom with a native speaker! People on the outside who insert these slips of paper into a slot on the wall of the room are engaging in some sort of Chinese version of the Turing test, and are wholly convinced that the room speaks fluent Chinese! The point is obvious: symbol manipulation does not equate with understanding; syntax alone is not sufficient for semantics.

To explain further how this works (and simultaneously introduce one of the objections I will consider), I would like to take a look at one hypothetical exchange. I unfortunately do not know Chinese, and so cannot employ Chinese script in this paper. However, I do know just a smidgen of Arabic, and if Hofstadter can poke fun at Searle's argument by calling it "the Mandarin Cabin," I too can employ an analogy via "the Arabic Tent."

Imagine a person who knows absolutely no Arabic is in a sealed Bedouin tent, and is passed a piece of paper through a slot that reads "السلام عليكم" ("peace be upon you"). The person then searches through a rulebook, and finds orders to send out a slip that reads "وعليكم السلام" ("and upon you be peace"). Now, if a non-Arabic speaker follows this rule, and convinces the person on the outside that he or she speaks Arabic, does this mean he or she actually knows Arabic? The obvious reality is that the person does not even need to know the Arabic script, despite the fact that, in clear Arabic, he or she has answered a popular greeting with the common response.
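To make the purely formal character of this exchange concrete, the tent's entire "competence" can be sketched as a lookup table. What follows is only my own illustration in Python (the rulebook contents, the function name, and the fallback behavior are my assumptions, not anything found in Searle or Hofstadter); the point is that the occupant's whole contribution is a string match:

    # A minimal sketch of the Arabic Tent as a lookup table. Everything
    # here (names, entries, fallback) is illustrative, not canonical.
    rulebook = {
        "السلام عليكم": "وعليكم السلام",  # "peace be upon you" -> "and upon you be peace"
    }

    def tent_occupant(slip: str) -> str:
        """Return the scripted reply for an incoming slip of paper.

        The occupant merely matches the incoming string and copies out
        the listed response; no meaning attaches to either phrase.
        """
        return rulebook.get(slip, "")  # send out a blank slip if the string is unknown

    print(tent_occupant("السلام عليكم"))  # the "correct" reply, with zero understanding

However long the real rulebook would have to be, nothing in it would ever amount to more than entries of this shape.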

With this method laid out before us, many would wonder how long it would take a person inside the Chinese Room, Mandarin Cabin, Arabic Tent, Hebrew Outhouse, or what have you, to engage in a meaningful conversation with a native speaker on the outside. Furthermore, just how large would the rulebook (or stack of rulebooks) have to be in order to anticipate and have a response to every string of characters? Hofstadter, in his rather rough and somewhat polemical response, put forth such spatiotemporal objections in the following way:
    Searle misleads readers about spatial and temporal orders of magnitude... [A]ny human-readable printout onto paper of an AI program that could pass the Turing Test in this manner would certainly contain billions (or more likely, trillions) of lines of text, and would therefore not be simply a normal, visualizable scroll of paper with some symbols on it, but would require truckloads if not an entire aircraft carrier full of symbol-covered paper... Secondly... it would take a million seconds - some twelve days or so - to [answer "no" to a question about a hardboiled egg]... I have actually been enormously lenient in my estimates, and it would probably be more on the order of decades than days[.][6]
Anyone who would waste their time employing this mode of attack has clearly missed the point of Searle's argument, but alas this, mixed with a little ad hominem à la Hofstadter, makes up the entire objection as found in the fourth chapter of Le Ton Beau de Marot[7]. Searle's point is that a computer would be doing exactly what the man in the room would be doing, only much faster and with a nearly microscopic rulebook (essentially recorded on some small portion of a chip).

The reality is that those who engage in the aforementioned spatiotemporal attack do so in an attempt to avoid the basic argument. Searle, in his demolition[8] of Fodor's "afterthought"[9] on Yin and Yang, sums up his argument with the following syllogism:
  • (1) Programs are formal (syntactical).
  • (2) Minds have contents (semantic contents).
  • (3) Syntax is neither identical with nor sufficient by itself for semantics.
  • (∴) Programs are neither sufficient for nor identical with minds; i.e., strong AI is false.[10]
It was Searle's point that Fodor ignored this syllogism. The reality is that most critics have ignored it. Those who take the spatiotemporal route are really just side-stepping the argument, and this is also the case for those who beat away at another dead horse: Searle's comments about the "right stuff."

A number of Searle's critics want to take issue with the fact that Searle made a biological argument for intentionality being something specific to brains. As Searle puts it, brains mentate in much the way that mammalian nipples lactate. Copeland goes on at length trying to prove that, contrary to Searle's claim, one can indeed make a computer out of toilet paper[11]. Boden attempts to argue that Searle's comparison of the biochemical synthesis of intentionality to photosynthesis is a bad analogy, in light of the fact that we can clearly define the latter yet do not quite understand the former[12]. Dennett mocked Searle's paper for implying that intentionality was some sort of "marvelous mental fluid" rather than a mere byproduct of the input-output exchange taking place in the brain.[13]

I am of the impression that all these writers have succeeded in proving the point they wanted to get across. Indeed, Searle's rhetorical question about the alleged absurdity of thinking computers made out of toilet paper or beer cans was a subtle form of poisoning the well; it was a little sleight of hand - or sleight of mind as Hofstadter calls it - designed to suck the reader into agreeing with his biological argument.

Nevertheless, while I feel that Searle's plea for a strictly biological definition of intentionality has been demolished by the philosophers mentioned above, this does not touch the general point captured in the syllogistic reconstruction of his Chinese Room argument. While these philosophers undoubtedly attack the biological argument because it appeared in Searle's original paper[14], it should be realized that the biological argument and the Chinese Room thought experiment are logically independent of one another. Taking time to refute the former has resulted in bloated texts devoting far too much space to minutiae that are irrelevant to the real issue at hand: the implication posed by the 'Mandarin Cabin' gedankenexperiment that a thinking machine is not an ontological possibility.

Searle anticipated two responses to his argument: the systems reply and the robot reply. In both instances, he accurately predicted what have come to be the sticks with which his critics most often beat him. The systems reply is basically that while the man in the room does not understand Chinese, the room as a whole does. The robot reply asks us to imagine not just a stationary room that deals with the input and output of formal symbols, but a robot that interacts with the world on a physical level, walking around, listening to speech and responding verbally. It is my opinion that Searle successfully defeated both arguments.

With regard to the systems reply, Searle asked that we imagine the man in the room internalizing all the rules in the rulebook. Hofstadter mocked this by once again playing the spatiotemporal card, arguing that it was far too much information to memorize, and would take far too long to digest[15]. Copeland insisted that Searle's request gave rise to a deductively invalid argument, in which Searle concluded that because a part of the system is ignorant of Chinese, the whole is ignorant as well[16].

Both responses are weak in my opinion. Hofstadter's point is not even worth considering, as we are presupposing that these commands can be memorized, just as they would be stored in a computer. As for Copeland, for all his talk of invalid arguments and circular reasoning, he has missed the point as well. To return to the example of the Arabic Tent, Copeland lost sight of the fact that a man who only remembers to reply to "السلام عليكم" with the phrase "وعليكم السلام" does not actually know Arabic.

Dennett's version of the systems reply is similar to Copeland's. He writes, "I understand English; my brain doesn't[.]"[17] The point here is that while your synapses may be ignorant, you as a whole are not. Searle missed an opportunity to capitalize on the fact that Dennett was leading with his chin, and instead opted to say that, contrary to what Dennett claims, it is his brain that understands English, just as it is his stomach that digests pizza[18]. A better reply would have been to remind Dennett that we are presupposing that the system as a whole (this can be any given person) has memorized a bunch of rules for symbol manipulation, and that this alone is not sufficient for understanding. Dennett's appeal to his own understanding of English is a non sequitur in light of the fact that this understanding involves not just syntax, but semantics as well!

The robot reply is the same game, and its proponents have missed the point as well. Suppose you are inside the head of this robot. A woman in the real world says "pass the salt." You neither hear this command nor see this woman, but on your screen appears the following string: 01-11-00-10-10-01-01-11. When you see this, your rulebook orders you to hit a number of buttons. You do so, and the hunk of metal (and maybe even artificial tissue) that is the robot says "sure" and passes the salt. For all this symbol manipulation, how conscious is the robot after all? How much does it understand?
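Notice that adding a body changes the robot's inputs and outputs but not the kind of operation performed inside. Here is a sketch in the same spirit as the one above; only the binary string comes from the example, while the motor commands are invented for illustration:

    # The robot reply seen from inside the robot's head: on-screen strings
    # map to button presses. The command names are my own invention.
    robot_rulebook = {
        "01-11-00-10-10-01-01-11": ["SAY 'sure'", "GRASP_SALT", "EXTEND_ARM"],
    }

    def operator(screen: str) -> list[str]:
        """Press whatever buttons the rulebook dictates for the on-screen string."""
        return robot_rulebook.get(screen, [])

    print(operator("01-11-00-10-10-01-01-11"))  # the salt gets passed; nothing is understood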

Amazingly, however, the previously maligned Douglas Hofstadter does come close at one point to delivering a heavy blow to Searle's argument. This is found in Hofstadter's comparison of Searle's demon to Haugeland's demon[19]. Searle's demon is the man in the Mandarin Cabin. Haugeland's demon is a different entity altogether: it lives inside the head of a woman with brain damage. This woman's defective brain is no longer able to send neurotransmitters from one neuron to the next. Fortunately, Haugeland's demon is able to move with lightning speed from one neuron to the next, and with this neurological and demonic crutch, the woman's pathetically defective brain works as well as any other brain. According to Hofstadter's interpretation, Searle agrees with Haugeland that this woman thinks and understands.

If this is in fact the case, Hofstadter and Haugeland[20] have managed to turn the systems reply on its head! This is what we might call the "strengthened systems reply." Hofstadter takes it a bit further when he asks that we imagine God sitting at his creation-meter (the machine that makes, preserves, or destroys everything). The deity starts turning the knobs while editing the file on the woman's brain and her H-demon. The brain is turned into a room. The synapses and neurons are turned into slots. The neurotransmitters are turned into slips of paper, and Haugeland's demon is turned into Searle's demon!

The final result is a counterexample to Searle's Chinese Room argument, which was itself supposed to be a counterexample to Strong AI. The conclusion is that it is possible for a machine to actually be intelligent. If this is the case, Searle's argument has been refuted. The only question is: how did Hofstadter defeat the syllogism mentioned above (if he defeated it at all)?

The reality is that Hofstadter's counterexample does not tackle Searle's syllogism. Hofstadter might argue that it is the third premise (syntax alone is not sufficient for semantics; a syntactic program can never acquire semantics) that is being attacked,[21] but it would be difficult to really demonstrate this. Hofstadter's argument can be summed up as "if God exists, then it is possible for Strong AI to exist," or:

(∃x)(x = God) ⊃ ◊(∃y)(y = starkekünstlicheintelligenz)

Then, however, we would have to wonder about the antecedent. Surely if God exists, then anything is possible for Him. The fact that Hofstadter has to invoke a deity in order to supplement his list of possibilia hints at the weakness of the argument. In short, Hofstadter's response is an example of a fantastic analogy that has gone too far. Never mind the fact that the semantic operations taking place in the brain of a human woman make the transition from person to room problematic. The fact that the strongest objection to Searle's thought experiment still falls short is what causes me to disbelieve in the existence of a purely syntactical thinking machine.



NOTES
  1. Dull Rigid Human (the alter ego of Douglas R. Hofstadter), in his fictional conversation with Ace Mechanical Translator (the alter ego of Alan M. Turing), from Hofstadter, Le Ton Beau de Marot, (Basic, 1997), p. 382.

  2. This is the term Searle uses; it is German for "thought experiment," but I guess it is more impressive in this foreign tongue. Humorously, a running joke about the term gedankenexperiment is that its real etymological origin is from Icelandic philosophers yelling "gott damn en experiment!" every time they dropped a squiggle or squoggle on their toe.

  3. I say that this is the way Searle defines it simply because some have taken mild issue with the way he treats "Strong AI;" for example, see Jack Copeland, Artificial Intelligence: A Philosophical Introduction, (Blackwell, 1993), pp. 225-226.

  4. The German word starkekünstlicheintelligenz is a compound of three words: starke (strong), künstliche (artificial), and intelligenz (intelligence); the formula thus reads, "strong AI is not possible." Use of this word seems to make as much sense as the use of gedankenexperiment. It seems that a word is more magical, or at least seems more important, if it is uttered in a foreign tongue. For more on this use of the term "magic" with regard to raising words to a special status, see John Wilson, Language and the Pursuit of Truth, (Cambridge, 1967), pp. 32-36.

  5. Hofstadter calls this person "Searle's demon" (or "S-demon" for short); see Hofstadter and Daniel C. Dennett (eds.), The Mind's I: Fantasies and Reflections on Self and Soul, (Basic, 1981), p. 375.

  6. Hofstadter, Le Ton Beau de Marot, pp. 97-98. The passage has been severely edited down by me; Hofstadter's spatiotemporal attack is much more detailed, but the basic idea is captured in the quoted text. Hofstadter employs a similar argument when he mocks Searle's description of the rulebook as a "few slips of paper" in Hofstadter & Dennett, The Mind's I, p. 375. However, Hofstadter's first version of the spatiotemporal response is found in Hofstadter, "Reductionism and Religion," The Behavioral and Brain Sciences, Vol. 3, 1980, p. 434.

  7. To be fair, Hofstadter's argument that appeared 16 years earlier in The Mind's I is much more complex and effective.

  8. This "demolition" takes place in Searle's "Yin and Yang Strike Out," in David M. Rosenthal (ed.), The Nature of Mind, (Oxford University Press, 1991), pp. 525-526.

  9. Jerry Fodor, "Afterthoughts: Yin and Yang in the Chinese Room," in Rosenthal, The Nature of Mind, pp. 524-525.

  10. Searle, "Yin and Yang Strike Out," Rosenthal, Nature of Mind, p. 526.

  11. Copeland, Artificial Intelligence, pp. 132-139.

  12. Margaret A. Boden, "Escaping from the Chinese Room," in Boden (ed.), The Philosophy of Artificial Intelligence, (Oxford University Press, 1991), pp. 91-93.

  13. Daniel C. Dennett, "The Milk of Human Intentionality," in The Behavioral and Brain Sciences, Vol. 3, 1980, pp. 428-430.

  14. Searle, "Minds, Brains, and Programs," in The Behavioral and Brain Sciences, Vol. 3, 1980, pp. 417-424.

  15. Hofstadter & Dennett, The Mind's I, p. 375.

  16. Copeland, Artificial Intelligence, pp. 125-130.

  17. Dennett, "The Milk of Human Intentionality," Behavioral and Brain Sciences, Vol. 3, p. 429.

  18. Searle, "Author's Response," Behavioral and Brain Sciences, Vol. 3, p. 451.

  19. Hofstadter & Dennett, The Mind's I, pp. 375-378.

  20. Hofstadter, as per the cited text from The Mind's I, was actually elucidating John Haugeland, "Programs, Causal Powers, and Intentionality," Behavioral and Brain Sciences, Vol. 3, pp. 432-433.

  21. For Searle's alleged abuse of the dichotomy between semantics and syntax, see Hofstadter, "The Slippery Slope between the Two S-words," Le Ton Beau de Marot, pp. 504-507.
