Briefing: The Chinese Room argument

AI and Society, Briefing, Philosophy, Philosophy-AI-Book

The Chinese Room argument is an argument against the thesis that a machine that can pass the Turing Test can thereby be considered intelligent. It was presented in a 1980 paper by the American philosopher John Searle.

Imagine, the argument goes, that someone is locked inside a room. You can communicate with him only through little cards that you push through a slot in the door. Now you are Chinese, and you push cards with various Chinese sentences into the slot. A sensible reply always comes back: someone inside the room returns a card with a reply (in Chinese) through the door slot. You never see the other person; you communicate only through these cards. What you don’t know is that the person inside the room does not speak any Chinese at all. He has only a big book, a kind of conversation dictionary, which tells him which card to give back as a reply for each card that you could possibly push through the slot. He looks up what you give him in his book, chooses the right reply card, and hands it back to you. Now the question is: does this man inside the room understand Chinese? By definition, no. We said that he didn’t. But obviously he kind of seems to understand, since he can converse with you in Chinese.

What does this thought experiment prove? It gives us a counterargument to the behavioural equivalence thesis. Behaviourally, the Chinese room (with the person inside it) is indistinguishable from a Chinese speaker. But in reality we know that the person in the room does not understand any Chinese. So, the conclusion goes, behavioural equivalence does not mean that two systems are equivalent in any meaningful and relevant way. Two systems can be behaviourally equivalent, and still one of them can understand Chinese while the other cannot. Applied to the Turing Test: I can have a machine that is perfectly able to converse with me, but this machine need not actually understand anything of what it is saying. Therefore, even if it were perfect at simulating a conversation, this would never be a real conversation, since the words don’t mean anything to the machine (in the same way that the Chinese words don’t mean anything to the man inside the Chinese room). Of course, there are many more arguments regarding the Chinese room. We will analyse the argument in detail in a later post.

Related: Life-and-death thought experiments are correctly unsolvable
