Friday, January 25, 2008
The Chinese Room argument is a thought experiment designed by John Searle (1980 [1]) as a counterargument to claims made by supporters of strong artificial intelligence (see also functionalism).
Searle laid out the Chinese Room argument in his paper "Minds, Brains and Programs," published in 1980. Ever since, it has been a mainstay of the debate over the possibility of what Searle called strong artificial intelligence. Supporters of strong artificial intelligence believe that an appropriately programmed computer isn't simply a simulation or model of a mind; it actually counts as a mind. That is, it understands, has cognitive states, and can think. Searle's argument against this position (or more precisely, his thought experiment intended to undermine it), the Chinese Room argument, goes as follows:
Suppose that, many years from now, we have constructed a computer that behaves as if it understands Chinese. That is, the computer takes Chinese characters as input and, following a set of rules (as all computers can be described as doing), correlates them with other Chinese characters, which it presents as output. Suppose that it performs this task so convincingly that it easily passes the Turing test: every question a human Chinese speaker asks is answered appropriately, so that the speaker is convinced that he or she is talking to another Chinese-speaking human. The conclusion that proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does.
Now, Searle asks us to suppose that he is sitting inside the computer. In other words, he is in a small room in which he receives Chinese characters, consults a rule book, and returns the Chinese characters that the rules dictate. Searle notes that he doesn't, of course, understand a word of Chinese. Furthermore, he argues that his lack of understanding goes to show that computers don't understand Chinese either, because they are in the same situation as he is. They are mindless manipulators of symbols, just as he is — and they don't understand what they're 'saying', just as he doesn't.
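The procedure Searle describes, receive symbols, consult rules, emit symbols, can be caricatured in a few lines of code. The Python sketch below is a toy under obvious simplifying assumptions (a two-entry "rule book" with invented phrases, nothing like a workable conversational system); its only point is to make vivid that every step is syntactic lookup.

# A deliberately crude sketch of the operator's procedure: pure shape-matching
# with no appeal to meaning. The rule book and its phrases are invented
# placeholders for illustration only.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "你叫什么名字？": "我叫王先生。",    # "What's your name?" -> "I'm Mr. Wang."
}

def operate_room(input_symbols: str) -> str:
    # Look the incoming string up in the rule book and return whatever the
    # rules dictate; no step in this procedure consults what anything means.
    return RULE_BOOK.get(input_symbols, "请再说一遍。")  # fallback: "Please say that again."

print(operate_room("你好吗？"))  # a fluent-looking reply, with zero understanding

The operator, like this function, never needs the English glosses in the comments; matching the shapes of the symbols is enough to produce the output.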
The formal argument
In 1984, Searle produced a more formal version of the argument of which the Chinese Room forms a part. He listed four premises:

1. Brains cause minds.
2. Syntax is not sufficient for semantics.
3. Computer programs are entirely defined by their formal, or syntactical, structure.
4. Minds have mental contents; specifically, they have semantic contents.

The second premise is supposedly supported by the Chinese Room argument, since Searle holds that the room follows only formal syntactic rules and does not "understand" Chinese. Searle posits that these premises lead directly to four conclusions:

1. No computer program by itself is sufficient to give a system a mind. Programs, in short, are not minds, and they are not by themselves sufficient for having minds.
2. The way that brain functions cause minds cannot be solely in virtue of running a computer program.
3. Anything else that caused minds would have to have causal powers at least equivalent to those of the brain.
4. The procedures of a computer program would not by themselves be sufficient to grant an artifact possession of mental states equivalent to those of a human; the artifact would require the capabilities and powers of a brain.

Searle describes this version as "excessively crude." There has been considerable debate about whether the argument is valid, and these discussions center on the various ways in which the premises can be parsed. One can read premise 3 as saying that computer programs have syntactic but not semantic content, in which case premises 2, 3 and 4 validly lead to conclusion 1 (a derivation sketched below). This in turn leads to debate as to the origin of the semantic content of a computer program.
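For readers who want the inference spelled out, one way to symbolize that reading is the following; the notation is an illustrative gloss, not Searle's own formalization. Let P(x) mean "x is a computer program," Syn(x) "x is entirely syntactic," Sem(x) "x has semantic content," and M(x) "x is a mind."

\begin{align*}
\text{P3:}\quad & \forall x\,\bigl(P(x) \rightarrow \mathit{Syn}(x)\bigr) \\
\text{P2:}\quad & \forall x\,\bigl(\mathit{Syn}(x) \rightarrow \neg \mathit{Sem}(x)\bigr) \\
\text{P4:}\quad & \forall x\,\bigl(M(x) \rightarrow \mathit{Sem}(x)\bigr) \\
\text{C1:}\quad & \forall x\,\bigl(P(x) \rightarrow \neg M(x)\bigr)
\end{align*}

P3 and P2 together entail that no program has semantic content; the contrapositive of P4 says that whatever lacks semantic content is not a mind; C1 follows. What remains contested is whether P2 and P3, so read, are true.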
Replies

The systems reply

Although the individual in the Chinese room does not understand Chinese, perhaps the person and the room, including the rule book, considered together as a system, do.
Searle's reply is that someone might in principle memorize the rule book; they would then be able to interact as if they understood Chinese, while still merely following a set of rules, with no understanding of the significance of the symbols they manipulate. This raises the interesting problem of a person being able to converse fluently in Chinese without "knowing" Chinese: such a person would face the formidable task of learning when to say certain things, and of learning a huge number of rules for getting by in a conversation, without understanding what the words mean. To Searle, rule-following and understanding are still clearly separate.
In Consciousness Explained, Daniel C. Dennett does not treat the two as separate. He offers an extension of the systems reply, which is essentially that Searle's example is designed to mislead the imaginer. We are asked to imagine a machine that would pass the Turing test simply by manipulating symbols in a look-up table, yet it is highly unlikely that so crude a system could pass the Turing test. Critics of Dennett have countered that a computer program is ultimately a list of instructions, which could be written into a book and followed by a person just as a computer follows them; so if any computer program could pass the Turing test, then a person working from the same instructions could also pass it, albeit much more slowly.
If the system were extended to include the various detection systems needed to produce consistently sensible responses, and presumably rewritten as a massively parallel system rather than a serial von Neumann architecture, it quickly becomes much less "obvious" that there is no conscious awareness going on. For the Chinese Room to pass the Turing test, either the operator would have to be supported by vast numbers of similarly mindless helpers, or the time allowed to produce an answer to even the most basic question would have to be enormous: many millions or perhaps even billions of years.
Dennett's point is that in granting "yes, it's conceivable for someone to use a look-up table to take input, give output, and pass the Turing test," we distort the complexities genuinely involved to such an extent that it does indeed seem "obvious" that this system would not be conscious. But such a system is irrelevant. Any real system able to genuinely fulfill the necessary requirements would be so complex that it would not be at all "obvious" that it lacked a true understanding of Chinese. It would clearly need to weigh up concepts, formulate possible answers, prune its options, and so forth, until its operation either looked like a slow and detailed analysis of the semantics of the input or was simply indistinguishable from that of any other speaker of Chinese. So, according to Dennett's version of the systems reply, unless we can show that actual Chinese speakers are something more than massively parallel networks whose behavior a von Neumann machine could in principle simulate, we will have to accept that the Chinese Room is every bit as much a "true" Chinese speaker as any Chinese speaker alive. The worked estimate below suggests why the look-up-table picture is a distortion.
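Dennett's charge that the look-up table is a cheat can be made concrete with a back-of-the-envelope count, sketched in Python below. All figures (vocabulary size, question length, number of turns) are invented for illustration; the conclusion survives any reasonable choice of numbers.

# A rough count of how large a literal look-up table over whole
# conversations would have to be. The constants are illustrative
# assumptions, not measurements.
VOCABULARY = 3000       # assumed working set of Chinese characters
QUESTION_LENGTH = 10    # assumed characters per question
TURNS = 20              # assumed question-answer turns per conversation

questions_per_turn = VOCABULARY ** QUESTION_LENGTH
# Each canned answer may depend on the whole conversation so far, so the
# number of keys the table needs multiplies across turns.
table_entries = questions_per_turn ** TURNS

print(f"possible questions per turn: ~10^{len(str(questions_per_turn)) - 1}")
print(f"history-dependent entries:   ~10^{len(str(table_entries)) - 1}")
# Prints roughly 10^34 and 10^695 respectively; for comparison, the
# observable universe contains only about 10^80 atoms.

Any physically realizable system that passes the Turing test must therefore be doing something far more compact and structured than table lookup, which is exactly Dennett's point.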
The robot reply

The robot reply grants that the room, as described, does not understand Chinese, but proposes embedding the program in a robot with cameras and effectors, so that its symbols are causally connected to the things they are about. Searle's response is that this changes nothing: the operator inside the robot would still be manipulating uninterpreted symbols, and would not know that some of them originate in a camera or end up driving motors.
Wikibooks: Consciousness Studies
John Searle (1980), "Minds, Brains and Programs," original draft from Behavioral and Brain Sciences.
John Searle (1983), "Can Computers Think?", in David Chalmers (ed.), Philosophy of Mind: Classical and Contemporary Readings, Oxford University Press, 2002, ISBN 0-19-514581-X, pp. 669-675.
John Searle (1984), Minds, Brains and Science: The 1984 Reith Lectures, Harvard University Press. Hardcover ISBN 0-674-57631-4; paperback ISBN 0-674-57633-0.
Stevan Harnad (2001), "What's Wrong and Right About Searle's Chinese Room Argument," in M. Bishop and J. Preston (eds.), Essays on Searle's Chinese Room Argument, Oxford University Press.
Stevan Harnad (2005), "Searle's Chinese Room Argument," in Encyclopedia of Philosophy, Macmillan.
Dissertation by Larry Stephen Hauser.
Larry Hauser, "Searle's Chinese Box: Debunking the Chinese Room Argument," available at http://members.aol.com/lshauser2/chinabox.html
Stanford Encyclopedia of Philosophy on The Chinese Room Argument
Philosophical and analytic considerations in the Chinese Room thought experiment
Interview in which Searle discusses the Chinese Room
Understanding the Chinese Room (critical) from Zompist.com