Copyright © 1998, Glenn Mason-Riseborough - where what I mean by "copyright" is spelled out eloquently by Peter Suber on his Copyright page (and taking it as read, in the context, that where Suber refers to his documents, pages, and site I am referring to my documents, pages, and site).

However, whereas Suber is a professional academic, I am not. Before reading any further, read my disclaimer and warning on my My Writings page.

Back to My Home Page | Back to What is Philosophy? (in a nutshell) | Back to My Writings

Responses to Functionalism – The China Brain and the Chinese Room

Glenn Mason-Riseborough (26/10/1998)

 

1.0 Introduction

While functionalism comes in many different forms, it generally holds that our psychological make-up is determined by our particular functional construction.  This means that functionalism maintains that if something is a functional duplicate of me, it should have mental states identical to my own.  Ned Block's China Brain and John Searle's Chinese Room are both attempts to expose weaknesses in functionalism.  Each describes a hypothetical construction that is functionally similar to a human being but that stretches (and, its author hopes, breaks) our intuition that such a construction thinks.  In other words, functionalism states that these beings have mental states, but our intuitions disagree, and we are forced to deny functionalism.  In this essay I will examine both of these attempted counter-examples (the China Brain and the Chinese Room) in turn.  I will outline the structure of each argument and then examine whether or not it succeeds in proving what it sets out to prove, namely the falsity of functionalism.  In discussing these two arguments, I will also address where and how they differ in their attempts to disprove functionalism.

 

2.0 China Brain

The China Brain thought experiment was originally developed and expressed by Ned Block in "Troubles with Functionalism."  In this essay I shall use two main sources for describing this putative counter-example to functionalism: Braddon-Mitchell and Jackson (1996) and Block (1990).

In the China Brain thought experiment, Block supposes that we convert the government of China to functionalism.  Each of the one billion citizens is given a two-way radio and unique instructions on whom to contact based on who called him or her.  Thus, for example, Citizen X will contact Citizen Y if and only if Citizens A, B, and C contact Citizen X within one minute of each other, and Citizen X is not contacted by Citizens D or E during that time period.  Naturally, we may make the instructions as simple or as complex as necessary.  In my updated version I will give each citizen a portable computer with an Internet connection rather than a radio.  This may be more expensive, but in a thought experiment expense is the least of our concerns, and it will probably enable faster and more productive management of the incoming and outgoing transmissions.  I can just imagine a billion citizens sitting down, managing their incoming e-mail, and sending mail on the appropriate occasions.  Block's aim here is to emulate neural communication within the brain.  Each individual neuron can be thought of as a simple input/output device, and each Chinese citizen fulfils the role of one neuron.  For neurons, the inputs arrive at the dendrites and cell body, and the outputs leave at the terminal buttons of the axon.  Whether a neuron signals the next cell is determined by whether its inputs are excitatory or inhibitory and whether they sum sufficiently to trigger its all-or-nothing output.  As such, the procedure is merely mechanical; no complex thought is required.  Likewise, the citizen's task is completely mechanical; he or she is not required to go outside the parameters of the instructions.
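The purely mechanical character of each citizen's task can be sketched as a toy rule.  This is only an illustration of the kind of instruction described above; the particular citizen names, the one-minute window measured from the current moment, and the function name are my own assumptions, not Block's.

```python
# A toy sketch of one citizen's (or one neuron's) purely mechanical rule.
# Names, the 60-second window, and the fallback details are illustrative.

def should_contact_y(events, now, window=60):
    """Contact Citizen Y iff Citizens A, B, and C have all called within
    `window` seconds of `now`, and neither D nor E has called in that time.
    `events` is a list of (caller, time) pairs."""
    recent = {caller for caller, t in events if now - t <= window}
    excitatory = {"A", "B", "C"}   # like excitatory synaptic inputs
    inhibitory = {"D", "E"}        # like inhibitory synaptic inputs
    return excitatory <= recent and not (inhibitory & recent)

# A, B, and C called recently and neither D nor E did, so X contacts Y.
print(should_contact_y([("A", 100), ("B", 110), ("C", 130)], now=150))  # True
```

No step in this rule requires understanding anything; the citizen (like the neuron) simply checks inputs against a fixed condition and fires or stays silent.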

Of course, as large as the population of China is, it is not nearly as large as the number of neurons in a human brain.  In addition, as Braddon-Mitchell and Jackson (1996) point out, these citizens are likely to get bored or make mistakes.  Realistically speaking, we are not going to be able to hand out one billion computers or radios, teach the citizens how to use them as we wish them to, and expect them to do so.  The point here though is that although the system is unrealistic in practice, this interconnected system of individuals is supposedly analogous to a human brain.  A functionalist generally thinks that the human brain is set up in a functional way that enables that human to have psychological states.  Yet the China Brain thought experiment challenges this intuition by creating a functional duplicate that on the surface does not induce the intuition that it thinks.

Now, it is rightly argued that the China Brain is not a proper brain because it needs to have appropriate sources for the inputs and outputs.  So to remedy this, we will build a mechanical android.  This android has a radio link to its China Brain.  The inputs of the brain then come from sensory devices on the android such as light, heat, or sound detectors.  The outputs go to motor control.  Thus, our China Brain is now controlling an android that has the capability of performing similarly to a human or other animal.  It should not matter to a functionalist that the brain of the android is external to the android itself.  If set up correctly, the android should be able to walk, talk and act exactly like a human being.  In fact, the functionalist should be forced to say that the android also has mental states in exactly the same way as a functionally similar human has.  According to Block and others who use the China Brain argument, this is something that does not seem intuitively correct.

A functionalist who holds this intuition would have to come up with reasons why the China Brain and android are not sufficiently similar in function to humans.  One way to do this would be to assert that the temporal and spatial differences preclude this similarity.  No matter how efficient they are, humans communicating via e-mail are not going to be as fast as neurons communicating electrically and chemically.  The android may react so slowly in comparison to humans that it is tempting to suggest that it does not have mental states.  In addition, the China Brain is so dispersed in location that it does not seem appropriately connected.  These are not good responses, however, because they put undue restrictions on what sorts of beings can have mental states.  It seems very anthropocentric to suggest that mental states are the exclusive property of human-sized beings with human-like perceptions of time.  It would seem to be logically possible to imagine a being whose brain was dispersed over an entire country or even a planet.  It is also logically possible for such a being to react so slowly that we could not see its mental states change within our own lifetimes.

In my view the best option for a functionalist is to bite the bullet and accept that the China Brain plus android have mental states.  In my view it is no great leap from assuming that other humans have mental states to assuming that the China Brain connected to an android has mental states.  If we accept this, then the China Brain is not a successful counter-example to functionalism because our intuitions agree with the functionalist claim.

 

3.0 Chinese Room

The response to functionalism known as the Chinese room is, according to Braddon-Mitchell and Jackson (1996), one of the most famous examples in the philosophy of mind.  It was first formulated by John Searle, and the version that I will be basing my argument on is given by Braddon-Mitchell and Jackson (1996).

In this thought experiment we set up a system such that a person (we will call him Fred) is placed in a locked room.  Now Fred knows English but he does not know any Chinese.  The room that Fred is imprisoned in contains nothing except for a computer monitor, a keyboard and a database in English.  The only thing that this database tells Fred is the algorithm for converting any string of Chinese characters into some other string of Chinese characters.  Naturally, since Chinese is a large and complex language this database is extremely large and complex.  In fact, this database is so complex that it varies the output based on the number of times the input has previously appeared.  Periodically, strings of Chinese characters appear on Fred’s monitor.  It is Fred’s task to follow the algorithm given by the database to determine what string of characters he types in response on the keyboard.  Fred is not aware of this, but the input character string is actually a meaningful story with relevant questions, and Fred’s output is a meaningful answer.
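Fred's procedure can be sketched as a simple stateful lookup.  The sketch below is only a toy illustration of the kind of rule-following described above: the class name, the rulebook contents, and the exact way the output varies with repetition are my own assumptions, not part of Searle's or Braddon-Mitchell and Jackson's description.

```python
# A toy sketch of Fred's rule-following in the Chinese Room.
# The response depends on the input string AND on how many times that
# string has been seen before, as in the "complex database" above.
from collections import defaultdict

class ChineseRoom:
    def __init__(self, rulebook):
        self.rulebook = rulebook       # maps (input_string, count) -> output
        self.seen = defaultdict(int)   # how often each input has appeared

    def respond(self, chars):
        n = self.seen[chars]
        self.seen[chars] += 1
        # Use the rule for the nth appearance; fall back to the first rule.
        return self.rulebook.get((chars, n), self.rulebook.get((chars, 0)))

# An invented two-entry rulebook: the reply changes on the second greeting.
room = ChineseRoom({("你好", 0): "你好", ("你好", 1): "又见面了"})
print(room.respond("你好"))  # 你好
print(room.respond("你好"))  # 又见面了
```

The point survives the sketch: nothing in `respond` attaches any Chinese string to what it stands for, so following the rulebook perfectly is compatible with understanding nothing.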

We have a situation here in which Fred does not understand Chinese, yet he is able to give meaningful responses to meaningful questions.  It seems, then, that despite our intuitions a functionalist must say that the system of Fred and the database together understands Chinese; the Chinese Room system will, after all, pass the Turing test of intelligence.  On the surface this looks like a counter-example to functionalism.  On reflection, however, Braddon-Mitchell and Jackson (1996) deny that the Chinese Room system really understands Chinese even from a functional perspective.  While they admit that it handles Chinese syntax, they deny that it grasps the semantics of Chinese: the system cannot attach a Chinese word to what it stands for, and thus it cannot understand Chinese.  The Chinese Room is therefore functionally different from a human, so the functionalist verdict agrees with our intuition and does not challenge the functionalist view of the mind.

Braddon-Mitchell and Jackson (1996) patch this by connecting an android to the system, much as in the China Brain thought experiment.  This android interacts with the outside world and essentially uses the Chinese Room system as a brain: Fred's output string is fed to the android's voice box, and the android appears to respond intelligently.  However, as with the China Brain, this modification means that we not only accept the extended Chinese Room system as understanding Chinese on functional grounds, but also now accept it intuitively.

Thus, with neither form of the Chinese Room have we arrived at a situation in which a functionalist must accept the system as understanding Chinese while our intuitions reject it.  Hence, the Chinese Room is not a counter-example to functionalism.

 

4.0 Differences between the China Brain and the Chinese Room

The China Brain and the Chinese Room share the obvious similarities of being set in China and of being failed attempts to counter functionalism.  However, there are some differences, at least between the China Brain and the unmodified Chinese Room.  When we examined the China Brain (connected to the android), both functionalism and our intuitions agreed that the China Brain had mental states; the argument against functionalism failed because we agreed with the functionalist claim.  In the unmodified Chinese Room, by contrast, functionalism agreed with our intuition that the system did not understand Chinese.  This showed us that the functional role of semantic understanding, not just syntactic understanding, is crucial to what it takes to understand something.  Thus, both attempts fail as counter-examples to functionalism, but for very different reasons.

 

5.0 Conclusions

Both Ned Block's China Brain and John Searle's Chinese Room are unsuccessful as counter-examples to functionalism.  This is because neither is able to present a situation in which the functionalist claim is at odds with what we intuitively take to be correct about mental states.

 

Bibliography

Block, N. (1990). Excerpt from "Troubles with functionalism." In W. G. Lycan (Ed.), Mind and cognition: A reader. Oxford: Blackwell.

 

Braddon-Mitchell, D. & Jackson, F. (1996). Philosophy of mind and cognition. Cambridge, Massachusetts: Blackwell.

