Do Chairs Think? AI’s Three Kinds of Equivalence


Does it think?

When we discuss the philosophy of AI, one of the most common questions to pop up is this: Does this machine think? And if it does, how would we know?

A similar question might be: Is this machine ‘intelligent’? And how can we judge the existence of ‘intelligence’ in an artefact?

These are all good questions. You can see that they are interesting, because I can, in principle, ask the same kind of question about a dog: “Does it think? Is it intelligent? And how would I know?” — Or about a chair: “Does it think? Is it intelligent? And how would I know?”

Are chairs intelligent?

We can probably agree that chairs (the old-fashioned, wooden type) are not intelligent. They don’t think at all. There’s no question about that. But how can we know for sure?

One way is to argue like this:

  • Thinking (and intelligence) requires a brain or perhaps a computer. In any case, a very complex information-processing internal structure.
  • Chairs don’t have such a structure. We can cut them open and see that they are just made of wood, and more wood. The wood is not structured anything like a computer or a brain.
  • Conclusion: Chairs cannot possibly be thinking.

This is not an entirely bad argument (nor is it very strong). We will call this the structural equivalence argument.

Another way would be to look at the behaviour of the chair. What does a chair do? It just sits there. You kick it, it doesn’t move away. You set fire to it, it doesn’t protest. You talk to it, it doesn’t answer. A dog, on the other hand, does display all sorts of interesting reactions: it comes when called, flees or fights when attacked, and runs away from fire. Humans have even more nuanced behavioural responses. So in this way, we could say that some thing X is intelligent if (and only if) the behaviour of X in various situations is similar to the behaviour of another thing Y that is known (or assumed) to be intelligent; a human, for example. So we compare the behaviour of a chair to a human and conclude that it is not intelligent at all. We compare the behaviour of a dog to a human and conclude that it is somewhat intelligent, but that it lacks some of the more interesting behaviours (for example, reading a book or creating poetry). We’ll call this the behavioural equivalence argument.

This is a much more convincing argument, but it also has its problems. We’ll discuss them below.

Finally, there’s a third way we can go about judging the intelligence of chairs. We can think of ‘intelligence’ not as something rooted in material structure alone (as in the structural equivalence thesis), and not as some purely behavioural attribute either (as in the behavioural equivalence thesis); instead, we could try to distinguish ‘hardware’ from ‘software’, the structural from the functional aspect of an artefact’s operation. A computer is the best example of such a distinction. You can have a program that adds two numbers on any number of different hardware platforms. Your mobile phone can add two numbers. Your desktop computer can. An abacus can, too. Or you can do it with pencil and paper. All these different ‘hardware’ implementations could be said to run the same ‘program’: the ‘software’ that actually performs the function of adding two numbers together. We will call this the functional equivalence thesis.

Unlike purely behavioural equivalence, functional equivalence does take into account the abstract organisation of the hardware that performs the function we are examining. When I add two numbers, a number of functional units are required (a short sketch follows the list):

  • I need to remember which numbers I am adding, and any intermediate results (a kind of memory).
  • I need some functional unit that can take two digits and perform the actual addition, keeping track of any carry-over numbers (a kind of processing unit).
  • I need some mechanism that controls the whole process and remembers which digits to add next, and where to put the result (a kind of operations controller).
  • I need some input and output unit: a way to enter the numbers to be added, and a way to see the result (an I/O interface).
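To make the decomposition a little more concrete, here is a minimal Python sketch (my own illustration, not taken from any existing system) in which each part of the program plays one of the four roles just listed: a small dictionary stands in for the memory, one function acts as the processing unit for single digits, another plays the operations controller, and reading the arguments and printing the result stand in for the I/O interface.

    # Minimal sketch of the four functional units, adding two numbers digit by digit.

    def add_digits(a, b, carry):
        """Processing unit: add two digits plus a carry, return (digit, new carry)."""
        total = a + b + carry
        return total % 10, total // 10

    def add_numbers(x, y):
        """Operations controller: decides which digits to add next and where results go."""
        memory = {"carry": 0, "digits": []}              # memory unit
        xs, ys = str(x)[::-1], str(y)[::-1]              # I/O: take in the two numbers
        for i in range(max(len(xs), len(ys))):
            a = int(xs[i]) if i < len(xs) else 0
            b = int(ys[i]) if i < len(ys) else 0
            digit, memory["carry"] = add_digits(a, b, memory["carry"])
            memory["digits"].append(digit)
        if memory["carry"]:
            memory["digits"].append(memory["carry"])
        return int("".join(str(d) for d in reversed(memory["digits"])))

    print(add_numbers(478, 95))                          # I/O: show the result, 573

The same functional description fits a person with an abacus or with pencil and paper just as well; only the implementation of each unit changes.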

What makes this approach different from structural equivalence is that now I can account for differences in structure. I can distinguish function from implementation, which is the way a function is physically realised in a particular piece of hardware. So I could recognise that a calculator, an abacus, and a human with a pencil and paper all have the same functional units described above, although these are implemented in very different ways in their respective hardware (electronics, wooden beads, neurons and muscles). Still, a functional description could easily identify that these devices are, despite their hardware differences, functionally equivalent.

Let’s now briefly see how we can judge these different approaches. Which is better? And what problems does each one have?

Structural equivalence

It’s easy to see the problems with structural equivalence:

  1. Who says that the chair’s wood is structurally dissimilar to a nervous system? Wood, under the microscope, is made up of cells, and in a sense that’s true of brain matter also. Macroscopically, both wood and brain matter are just undifferentiated goo, one solid, one gelatinous. Just from looking at the wood’s structure, it would be very difficult to conclude that it cannot do what the nervous system can do.
  2. The whole argument ignores the possibility of having intelligence in a different structure. It would be like saying that a table has to be made of wood, and have four legs. Sure, this is true of many tables, but certainly not all. There are tables made of metal and plastic, and there are tables with three (and two, and one) legs. Binding the concept of ‘table’ to a particular structural basis would make it impossible to talk about metal or plastic tables. And this would certainly be a mistake. Why should the ‘table-ness’ of an object depend on the material it’s made of, or an incidental property like the number of its legs?
  3. This argument is also unhelpful for the discussion of the possibility of Artificial Intelligence, because it settles the question in advance: no AI can be possible as long as artefacts don’t have biological, human brains. This means that, strictly speaking, artificial intelligence is impossible by definition. Intelligence will always have to be biological, human intelligence, driven by a human brain. This would also rule out the possibility that aliens (if they existed) might be intelligent, since, no matter what complex thoughts they might appear to have, they would surely not have a human brain (again, by definition: if they had one, they wouldn’t be aliens!).
  4. Then we should also consider that no two brains are identical. Even human brains show some variation between individuals. If one insists on strict structural equivalence, one should probably say that only the specific reference brain thinks optimally. All other brains, being structurally not exactly the same, would necessarily have a lower intelligence, because they are, to some extent, removed from the ‘reference’ brain.
  5. And finally, structural equivalence as a test for intelligence is implausible because this is simply not how we judge intelligence in our lives. When you ask yourself whether someone is intelligent, or more or less intelligent than you, you don’t dissect their brain to find out. You just watch what they do. If they can calculate faster, solve logic puzzles, and understand relativity theory, then you might admit that they are intelligent. So we never actually use structural equivalence to judge intelligence. Instead, we use behavioural criteria.

So let’s look at behavioural equivalence next.

Behavioural equivalence

Behavioural equivalence underlies the most famous test for artificial intelligence, the Turing Test: if a computer can chat in a way that is indistinguishable from chatting with another human, then the computer can be said to be intelligent (or so the premise of the Turing Test goes). Sounds plausible. What could be wrong with that?

  1. For one, not all intelligence needs to express itself in the ability to do everyday chatter. We have already built many systems that show clearly complex and ‘intelligent’ behaviour, but which cannot perform everyday conversation: for example, self-driving cars, image recognition programs, chess- and go-playing programs, programs that diagnose cancer and heart diseases, and many more. None of these can talk. But are they therefore not intelligent?
  2. Then there is the observation that even clearly intelligent people sometimes fail to converse. For example, if a normal, intelligent person visits a foreign country whose language she doesn’t speak, every attempt by the locals to chat with her will be unsuccessful. According to the premise of the Turing Test, the locals would be justified in concluding that this person is not intelligent. But this would clearly be wrong.
  3. Then, we know that a programmed character in a computer game can be made to perform any action and portray any feeling: pain, fear, love. Still, we clearly understand that these are programmed behaviours, and that the computer program does not actually understand or feel anything.
  4. This brings us to the final point: one of the most influential philosophical counter-arguments to the behavioural equivalence thesis, given by John Searle (1932-). It is called the ‘Chinese Room’ argument.

Imagine, the argument goes, that someone is locked inside a room. You can communicate with him only through little cards which you can push through a slot in the door. Now suppose you are a Chinese speaker, and you push cards with various Chinese sentences on them into the slot. And always a sensible reply comes back, as someone inside the room gives you back a card with the reply (in Chinese) through the door slot. You never see the other person; you only communicate through these cards. What you don’t know is that the person inside the room does not speak any Chinese at all. He has only a big book, a kind of conversation dictionary, which tells him which card to give back as a reply for each one of the cards that you could possibly give him as an input through the slot. He looks up what you give him in his book, chooses the right reply card, and gives it back to you. Now the question is: does this man inside the room understand Chinese? By definition, no. We said that he didn’t. But obviously, he kind of seems to understand, since he can converse with you in Chinese.
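The mechanism of the room is, at bottom, a lookup table. A toy Python sketch of the ‘conversation book’ makes this explicit; the phrases and the fallback card below are invented placeholders of my own, and the point is only that the reply is chosen without any representation of what the cards mean.

    # Toy sketch of the Chinese room's rule book: a pure lookup table.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",           # "How are you?" -> "I am fine, thanks."
        "今天天气怎么样？": "今天天气很好。",   # "How is the weather today?" -> "The weather is nice."
    }

    def room_reply(card):
        """Hand back the reply card listed in the book, or a stock card if none matches."""
        return RULE_BOOK.get(card, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(room_reply("你好吗？"))  # prints 我很好，谢谢。

Scaling the book up until it covers every possible conversation changes the size of the table, not the nature of the procedure.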

What does this thought experiment prove? It gives us a counterargument to the behavioural equivalence thesis. Behaviourally, the Chinese room (and the person inside it) is indistinguishable from a Chinese speaker. But in reality we know that the person in the room does not understand any Chinese. So, the conclusion goes, behavioural equivalence does not mean that two systems are equivalent in any meaningful and relevant way. We can have behavioural equivalence of two systems, and still one of them can understand Chinese while the other cannot. Applied to the Turing Test: I can have a machine that is perfectly able to converse with me, but this machine need not actually understand anything of what it is saying. Therefore, even if it were perfect at simulating a conversation, this would never be a real conversation, since the words don’t mean anything to the machine (in the same way that the Chinese words don’t mean anything to the man inside the Chinese room). Of course, there are many more arguments regarding the Chinese room. We will analyse the argument in detail in a later post.

Functional equivalence

This problem is what functional equivalence is trying to answer. A functionalist considering the possibility that something (a chair, a Chinese room, an alien, a robot) is intelligent would ask: does this candidate have a functional organisation that is likely to lead to intelligent, adaptive behaviour? So, although an MP3 player might appear to talk or make music, the functionalist would see that it lacks the functional units needed to generate speech autonomously. It is just playing back a pre-recorded sound. Nor can it be called a musical instrument, because it has no functional units that a human player could manipulate in order to generate music; again, it can only play back a pre-recorded sound. The Chinese room is not intelligent because the functionalist can see that it does not contain the functional units needed for intelligence: the Chinese room has no memory, no unit that associates symbols with meaning, no way to acquire meaning from experience.

On the other hand, functional equivalence does not require the equivalent things to be structurally similar. The wing of a bird is, for example, functionally equivalent to the wing of a butterfly, and both are (within limits) functionally equivalent to the wings of airplanes. But all three are structurally different, are made of different materials, and even work physically in different ways. Still, in the functional economy of a thing that flies, they perform similar functions, and thus could be called functionally equivalent.


The functionalist would be able to recognise, for example, that an alien is intelligent, even if its brain is made up of entirely different materials. He would also see the possibility that a (suitably complex) computer might be intelligent.

Although functionalism seems to navigate nicely around the problems suffered by both behaviourism and structural equivalence, it does have its own issues. One problem is the so-called Chinese nation argument. (This has nothing to do with the Chinese room. The Chinese room was Chinese because Searle wanted an example of an incomprehensible language, at least to him and his students. The Chinese nation is Chinese because the Chinese are many. As we will see in a moment, you can’t well make this argument with the Greek or the Maltese nation).

The Chinese nation argument goes thus: Assume you have a brain that works well using one billion neurons (this is about a hundredth of what our brains have, but this doesn’t matter for the experiment. We can make the argument with any number). Every neuron in this brain is connected with other neurons. Signals travel through the brain when a neuron receives signals from other neurons on its ‘input’ side, and then produces a new signal on its ‘output’ side. This signal is then propagated as an input to other neurons. Now I could, in principle, take out any one neuron in the brain and put a human in there to play the role of that missing neuron. I’d give instructions to the human to act exactly like the neuron he replaces: when the input signals are such-and-such, he should initiate an output signal. Otherwise, he should not. I give him dials to show the input signals, and a button to produce an output signal that is connected to other neurons.
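The instruction card such a person follows could be as simple as the rough Python sketch below; the threshold rule and its value are invented stand-ins of my own for whatever input-output behaviour the real neuron happens to have.

    # Rough sketch of one person's instruction card in the 'Chinese nation' brain.
    def follow_neuron_rule(input_signals, threshold=3):
        """Press the output button exactly when the incoming signals reach the threshold."""
        return sum(input_signals) >= threshold

    print(follow_neuron_rule([1, 0, 1, 1]))  # True: press the button
    print(follow_neuron_rule([1, 0, 0, 0]))  # False: do nothing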

If this man does his job as advertised, the brain should continue to function exactly as before. Now I can replace more neurons with more people and dials and buttons. Every time, I replace one neuron by one person who acts exactly as the neuron did. Now you see why we need a Chinese nation to play neurons. What if I replace all one billion neurons with one billion Chinese people? Accepting the premises of the argument, I would expect nothing strange to happen. The brain should work just as before, although it now does not contain a single neuron. The brain should be able to converse, make jokes, play chess, and write love poems. But at the same time, none of the people who are involved in this brain converses, jokes, plays chess or writes poems. They only press a button that fires ‘their’ neuron. So where does this ‘magic’ ability of the brain to do these things come from? And where is the brain’s consciousness now stored?

The point of the argument is this: If functionalism is correct, then the Chinese nation brain should be identical in function to the original brain. It has exactly the same functional units that implement exactly the same ‘software’ (or behaviour). It should then also have consciousness, pain, emotions, and a sense of self. But where are these things? The sense of self of that Chinese nation brain is nowhere in the people that make it up. Has the brain’s sense of self then suddenly disappeared? According to functionalism, this could not be. So either functionalism is wrong, or we must attribute some mysterious sense of self, humour, emotions and consciousness to a collection of people who don’t have (as individuals) any of these states (they have their own, but not the original brain’s!).

Again, there are many possible answers to this, just as there are to the Chinese room argument. But these are for another post.

 
