by Bob Murphy
The major problem with philosophy is that you can say anything you want and nothing will “happen” to you. A philosopher can say the world is really composed of idealized Forms, or that he is the only person who exists, or that nothing exists. But no matter how ridiculous his claims, he can’t ever be proven wrong.
In contrast, the hard sciences must correspond with the “empirical facts.” Newtonian mechanics made definite predictions about, e.g., the orbit of Mercury, and these predictions were wrong. Einstein’s general relativity made better predictions (and matched every success of Newtonian mechanics) and so physicists discarded Newton’s theory: It had been falsified.[i] Even economics, in which laboratory experiments are impossible, must keep its theories within certain bounds; any model that implied a higher GDP in Afghanistan than in the U.S. would be classified as “incorrect.”
Despite this lack of immediate feedback, we should not think that philosophical theories are of no consequence. Indeed, philosophy represents the highest abstraction of our knowledge, and thus filters down into all other disciplines. It is entirely possible that the “natural” human antipathy for the market, and the accompanying fatal conceit that causes socialism to initially appear so attractive, is largely due to the immense influence of Plato. If Plato had urged, rather than rule by philosopher kings, an anarchistic organization of social affairs, who can say what the ancient and medieval attitudes toward commerce would have been?
To prove my point, I will take the case of John Searle, whose (in)famous “Chinese Room argument” ostensibly put the nail in the coffin of both “strong AI” and functionalist definitions of mind and thought. It is my opinion that (a) Searle’s argument is absolute rubbish and merely proves what it assumes, and (more worrisome) that (b) critics have already come up with numerous objections, each of which is absolutely devastating to Searle’s argument. Nonetheless, many people – including Austrian economists whom I respect – continue to be fascinated by the Chinese Room argument.
Thus, in order to clarify the thinking of my gullible colleagues, as well as to bring some culture to the Velazquez groupies[ii] who hang around this site, I shall now summarize Searle’s argument, and explain its deficiencies. (The reader wishing a more extensive treatment should look here.)
The Chinese Room Argument
First, some background: The functionalists say that anything that can perform all of the operations of a mind should itself be classified as a mind. In particular, if a computer were able to pass the Turing test – that is, if a human being interacting through a keyboard and monitor couldn’t tell if there were another human or just a computer on “the other end” – then we would have to conclude that this computer was a living and thinking being. If humans can ever construct such a computer, the functionalists claim, then the goal of Artificial Intelligence will have been achieved.
In this context, Searle offered (in 1980) his famous thought experiment: Imagine that you are locked in a room, and that people on the outside are feeding you slips of paper on which they have written questions in Chinese. Using an enormous rulebook, you are able to mechanically transform the symbols on the paper into other symbols, which you write on another piece of paper and hand back to the people on the outside.
Now here’s the trick: The rulebook is written such that what you write down are actually sensible answers to the questions, in perfectly grammatical Chinese! So even though you yourself (we assume) know nothing of this language, and have absolutely no idea what the symbols mean, the people on the outside will be fooled into thinking that there is a thinking being inside the room who is fluent in Chinese.
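The setup can be caricatured in a few lines of code. The mapping below is a hypothetical toy stand-in for Searle’s enormous rulebook (the two entries are my own invented placeholders); the point is that nothing in the lookup procedure involves understanding Chinese:

```python
# A toy "Chinese Room": the operator mechanically applies rules without any
# grasp of what the symbols mean. The rulebook is just a string-to-string
# mapping; these two entries are hypothetical placeholders for Searle's
# enormous book.
RULEBOOK = {
    "你叫什么名字？": "我叫王伟。",      # "What is your name?" -> "My name is Wang Wei."
    "你是做什么工作的？": "我是老师。",  # "What is your occupation?" -> "I am a teacher."
}

def chinese_room(question: str) -> str:
    # Pure syntactic lookup: no step in this function involves understanding
    # Chinese, yet the output reads as fluent Chinese to the people outside.
    return RULEBOOK.get(question, "对不起，我不明白。")  # "Sorry, I don't understand."

print(chinese_room("你叫什么名字？"))  # -> 我叫王伟。
```

Whether such a lookup table “understands” anything is, of course, exactly what the argument is about.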
The point of the story is to show that the functionalist approach can’t be right, for we have here an apparent counterexample, where symbols are being syntactically manipulated to satisfy outsiders, yet no actual grasp of the underlying semantics exists. Thus, even if we could build a machine that passed the Turing test, it wouldn’t actually be thinking, it would only be simulating thought (just as computer programs simulate weather but do not actually contain hurricanes).
As I mentioned earlier, there are all sorts of objections that have been raised. My personal favorite – and the one I had in mind when I claimed in this essay that I’d torn Searle’s argument a “new asshole” – was that the argument proves far too much. (Incidentally, this rhetorical technique was a favorite of Böhm-Bawerk, who used it to show, e.g., that the alleged “paradox of saving” would also ‘prove’ that one shouldn’t save up to pay his rent at the beginning of each month.)
For if we believe that Searle has proven that the man in his room (or more precisely, the system of the “rulebook-plus-man”) doesn’t understand Chinese, then Searle has also proven that a Chinese person doesn’t understand Chinese. After all, no individual cell in a Chinese person’s nervous system understands Chinese, and all of these cells obey the “blind” laws of biology,[iii] just as surely as the man in Searle’s room blindly follows the rulebook.
Searle’s responses to critics prove that I am not misrepresenting him. (All quotes are drawn from the Larry Hauser article linked above.) For example, in response to the “Systems Reply,” Searle writes that it’s just ridiculous to say “that while [the] person doesn’t understand Chinese, somehow the conjunction of that person and bits of paper might.” In the same way, then, it’s just ridiculous to claim that while any individual cell doesn’t understand Chinese, any conjunction of cells (e.g. a Chinese person) might.
My Personal Contributions
I came up with the above objection on my own as an undergrad, but I’ve since learned that others had already thought of it. However, I have a few further objections that I have not seen stressed in the literature.
First, if we follow Searle in thinking that “mind consists of qualia [subjective conscious experiences]…right down to the ground,” and that “[any] system capable of causing minds would have to have causal powers (at least) equivalent to those of brains,” then how in the heck did minds ever come into being? If one believes in evolution,[iv] then at some point in the past, there were no minds. At some later point, there were minds. According to the axiomatic claims made by Searle and others, and in particular their implicit assumption that the whole cannot be greater than the sum of its parts, this creation of thought from non-thought is very problematic.
Second, we can employ all of the arguments that have been deployed against the market socialists. If, after I have done this, the reader still thinks Searle has a point, then the reader must also believe that Lange had a point against Mises and Hayek.[v]
The rulebook couldn’t possibly contain the responses to all conceivable questions; since there is no limit on the length of a possible question, it would require an infinite number of rules. Even if we limit the number of symbols that can be written on a piece of paper – and hence place an upper bound on the permutations of questions – with a moderately sized piece of paper, I have no doubt that the number of entries in the rulebook would have to exceed the number of atoms in the universe.
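A quick back-of-the-envelope count bears this out. Assuming (hypothetically – both figures are mine, not Searle’s) a working vocabulary of 2,500 Chinese characters and questions of at most 100 characters, the number of possible questions dwarfs the usual ~10^80 estimate for atoms in the observable universe:

```python
import math

CHARS = 2500   # assumed size of a working Chinese character set (hypothetical)
MAX_LEN = 100  # assumed maximum question length, in characters (hypothetical)

# Count every distinct string of length 1..MAX_LEN over a CHARS-symbol
# alphabet; the rulebook needs at least one entry per possible question.
questions = sum(CHARS ** n for n in range(1, MAX_LEN + 1))

ATOMS_IN_UNIVERSE = 10 ** 80  # common order-of-magnitude estimate

print(f"possible questions ~ 10^{math.log10(questions):.0f}")
print(questions > ATOMS_IN_UNIVERSE)  # True
```

Even with far stingier assumptions – a few hundred characters, questions a dozen characters long – the count still comes out astronomically large.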
Even if we grant that a rulebook could be designed that would allow someone to answer any possible question, it is absolutely impossible for the person to do so in real time. That is, the outsiders might ask, “What is your occupation?” (in Chinese), and we can grant that there would be a section in the rulebook explaining how to answer, “I am a schoolteacher” (in perfect Chinese), but it would still take the person far longer to generate this answer than it would take a Chinese person’s brain. Thus, even armed with an appropriate rulebook, the English speaker inside the room would fail the Turing test.
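To put a number on the point, suppose the rulebook somehow existed and the man could check entries at an absurdly generous rate. Both figures below are hypothetical stand-ins; the sketch assumes a naive front-to-back search (a cleverly indexed book would fare better, but no human flipping pages approaches a brain’s response time):

```python
ENTRIES = 10 ** 340               # assumed rulebook size (hypothetical figure)
RATE = 10 ** 9                    # entries checked per second -- wildly generous
AGE_OF_UNIVERSE_S = 4 * 10 ** 17  # ~13.8 billion years, in seconds

# Worst case for a naive, unindexed search: scan the whole book once.
lookup_seconds = ENTRIES // RATE

print(lookup_seconds > AGE_OF_UNIVERSE_S)  # True: one lookup outlasts the universe
```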
Furthermore, the rulebook had to be written before the test. But this means the book can’t take into account changes in the data. For example, the outsiders could apply heat to the wall of the room, and ask (in Chinese), “Is it getting hotter or colder in there?” The man inside, following a rulebook compiled in advance, would have no way to answer correctly.
Thus, Searle’s Chinese Room argument is comparable to Lange’s fictitious central planner, who gathers all relevant data about the economy and plugs them into a gigantic system of equations, in order to match the performance of the market economy. If you think that I’ve merely raised “practical” objections, and that Searle has shown “in principle”[vi] the weakness of the functionalist argument, then you must also think Lange has shown “in principle” the weakness of Mises’ calculation argument against socialism, and that Hayek has only raised “practical” objections to Lange’s thought experiment.
To see that this analogy is fair to Searle, let us quote from Hauser, who describes Searle’s response to the “Connectionist Reply” to his original argument:
Imagine, if you will, a Chinese gymnasium, with many monolingual English speakers: each follows their [sic] own (more, limited) set of instructions in English. Still, Searle insists, obviously, none of these individuals understands; and neither does the whole company of them collectively. It’s intuitively utterly obvious, Searle maintains, that no one and nothing in the revised “Chinese gym” experiment understands a word of Chinese either individually or collectively. Both individually and collectively, nothing is being done in the Chinese gym except meaningless syntactic manipulations from which intentionality and consequently meaningful thought could not conceivably arise.
Again, this elaboration by Searle raises problems for subscribers to materialism and evolution, since such people must believe that non-thinking molecules gave rise (in the distant past) to living cells and ultimately sentient creatures.
For the relation to the market socialism debate, however, consider this: Suppose Lange had answered Hayek by picturing a large Chinese market, covering the entire country. Now, it is clear that no individual trader knows how to rationally allocate resources. Each trader “blindly” follows a simple set of rules (such as ‘maximize utility as best as possible without exceeding one’s budget’). It is then utterly obvious that collectively, the group cannot possibly be rationally allocating resources. Therefore, the entire approach of Mises and Hayek – which evaluates an economic system on whether it can fulfill the function of channeling scarce natural resources and capital goods to the highest valued uses – is seen to be nonsensical.
I believe I have demonstrated that Searle’s Chinese Room argument contributes nothing to our understanding of mind or intelligence. It is an absurd thought experiment that panders to false intuitions, and has done nothing but sidetrack philosophers for decades. I do not claim that I have made the case for functionalism;[vii] indeed, I don’t think a computer could pass the Turing test, for the simple reason that it would lack the experiences of an adult human being, a fact that clever interlocutors could exploit.
What I have shown, I believe, is that John Searle is a smug charlatan whose work (on other topics as well) is comparable to that of the market socialists. It took the downfall of the Soviet Union before people finally took the critics of Lange et al. seriously. I imagine we won’t put Searle’s Chinese Room argument to bed until we’ve got our first working android.

[i] I am perfectly aware that the standard Popperian description of the progress of science is not entirely accurate, and is no longer defended by serious thinkers. This does not affect the validity of my description, and only proves that the philosophy of science (in contrast to ethics or epistemology) possesses at least a modest check upon the claims it can make.

[ii] Don’t kid yourself; if you read Alex’s blog – and you know you do – then you’re a groupie.

[iii] This argument doesn’t work so well against dualists, but Searle, and many of his fans, claim not to be dualists.

[iv] Of course, this argument doesn’t work against creationists, but then again I don’t think Searle or most of his followers are creationists.

[v] In a pathbreaking 1920 article, Ludwig von Mises explained that socialism could not work as an economic system because the lack of prices for capital goods would make central planning nothing but arbitrary guesswork. Oskar Lange replied that this wasn’t true, for “in principle” central planners could announce a vector of ‘prices’ for capital goods that the State factory managers would then use as exchange ratios for trading. The planners would experiment with the ‘prices’ until supply equaled demand in all markets. Friedrich Hayek pointed out numerous practical difficulties with Lange’s proposal. For a mainstream view of the debate, see http://cepa.newschool.edu/het/essays/paretian/social.htm.

[vi] Remember, however, that since Searle’s rulebook would need an infinite number of entries, it cannot be written even in principle, just as Cantor’s diagonal argument demonstrated that one cannot list the real numbers even in principle. For more on Cantor’s ingenious argument, see http://dsl.serc.iisc.ernet.in/~vikram/undecide.html.

[vii] For an excellent exposition, see Daniel Dennett’s Consciousness Explained.
June 12, 2002
Bob Murphy is a graduate student in New York City. He is a columnist for LewRockwell.com and The Mises Institute, and has a personal website at bobmurphy.net. He is also Senior Editor for anti-state.com.