GOFAI, Eliza, and SHRDLU: Symbols and Meaning

If we want to divide the field of AI and its tradition into two big camps, the most obvious distinction would be that between symbolic and non-symbolic (or sometimes: subsymbolic) AI.

We will leave aside ancient, mythical, and magical approaches and jump right into modern times. Historically, symbolic AI was the first type to be explored. In 1966, Joseph Weizenbaum programmed “Eliza,” the first chatbot. Eliza could hold a very limited conversation with a user, who took on the role of a patient in a kind of psychological counselling session. This went something like:

  • Patient: I feel unhappy.
  • Eliza: Why do you feel unhappy?
  • Patient: It may have to do with my mother.
  • Eliza: Tell me more about your mother.

… and so on. What the program actually did was simply rephrase the user’s input, reordering the words according to the rules of English so that a statement became a question (the first reply in the example above). Another trick was to react to specific keywords (like “mother,” “husband,” “family,” and similar) and to prompt the user to say more about them.
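
To make the mechanism concrete, here is a minimal sketch in Python of the two tricks just described: reacting to keywords and turning statements back into questions. The rules and function names are purely illustrative; Weizenbaum’s original program used a more general script of pattern-matching and reassembly rules.

```python
import re

# Illustrative rules, not Weizenbaum's original script.
KEYWORD_PROMPTS = {
    "mother": "Tell me more about your mother.",
    "family": "Tell me more about your family.",
}

def eliza_reply(user_input: str) -> str:
    text = user_input.lower().strip(".!?")
    # Trick 1: react to specific keywords.
    for keyword, prompt in KEYWORD_PROMPTS.items():
        if keyword in text:
            return prompt
    # Trick 2: rephrase an "I feel ..." statement as a question.
    match = re.match(r"i feel (.+)", text)
    if match:
        return f"Why do you feel {match.group(1)}?"
    return "Please go on."

print(eliza_reply("I feel unhappy."))                     # Why do you feel unhappy?
print(eliza_reply("It may have to do with my mother."))   # Tell me more about your mother.
```

Everything here is plain text substitution; at no point does the program connect “mother” or “unhappy” to anything outside the conversation.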

This kind of conversational program has since been endlessly copied and improved upon, and the underlying principle is still the same in many modern chatbots (e.g. the AIML-driven ALICE, which you can try out here: http://alicebot.org).

Of course, Eliza and Alice don’t “understand” anything at all, and this is the core of the Chinese Room argument. If a machine just transforms sentences following the rules of syntax, but without having any internal representation of what the words mean, we cannot say that this machine understands. And if it doesn’t understand, how can it be “intelligent”? There are answers to the Chinese Room argument, and we will talk about them in another post.

The thing to see here is that programs of this type represent the world internally as a collection of symbols. A symbol, in this sense, is a physical token that represents another thing. This “physical token” can be anything: ink on a piece of paper, scratches on a wooden surface, sound waves, pixels on a computer screen, or bits inside a computer’s memory chip.

So here we have a word: “CAT.” This word is, for you, a series of dots on your screen. If you print this page out, it will be a bit of printer toner on a piece of paper, forming the letters C, A, T. But what matters for the meaning of the word is not these marks themselves. It is what the symbol stands for, what it refers to: an animal that we know, and which we call a cat.
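
The same arbitrariness can be seen inside a computer. A small illustrative Python snippet shows the word “CAT” as the bit pattern it becomes when encoded as ASCII:

```python
word = "CAT"

# The "physical token" inside the machine: three bytes of ASCII codes.
print(list(word.encode("ascii")))                               # [67, 65, 84]

# The same token written out as raw bits.
print(" ".join(f"{byte:08b}" for byte in word.encode("ascii")))
# 01000011 01000001 01010100

# Nothing in these bits resembles a cat; the meaning lies in what the
# symbol refers to, not in the marks (or bits) themselves.
```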

Symbols, in this sense, are different from pictures. A picture of a cat also points to a cat, but in a different way: an image resembles the thing it pictures. The picture of a cat looks like a cat. A symbol, on the other hand, does not look anything like the thing it represents: the letters CAT don’t look like a cat. If you didn’t speak English, you wouldn’t be able to say what they stand for. Similarly, if you don’t read Chinese, you can’t tell what a Chinese character means just by looking at it.

This is true of all symbols. Take two familiar signs: the airport pictogram of an airplane, and the kind of sign that shows two simplified human figures in different clothing. If you hadn’t learned the meaning of these, you would have a hard time guessing from the shape of an airplane that the symbol means “airport,” since an airport certainly does not look like an airplane. Similarly, the simplified shapes of two humans wearing different types of clothing do not by themselves suggest the meaning of the sign. It could equally well mean “mind the wall between people,” or “don’t communicate with each other,” or “stop and stand side by side.”

So the point of symbolic AI is that it represents all things as symbols inside the program. Even when the program does understand what the symbols mean, it still represents the things of the world that it knows about as symbols. A good example of this, and one of the first AI programs to actually “understand” something, was Terry Winograd’s SHRDLU (around 1970).

SHRDLU simulated a toy world of building blocks, and could virtually manipulate these blocks by following natural language commands:

  • Person: PICK UP A BIG RED BLOCK.
  • Computer: OK. (does it)
  • Person: GRASP THE PYRAMID.
  • Computer: I DON’T UNDERSTAND WHICH PYRAMID YOU MEAN.
  • Person: FIND A BLOCK WHICH IS TALLER THAN THE ONE YOU ARE HOLDING AND PUT IT INTO THE BOX.
  • Computer: BY “IT”, I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN THE ONE I AM HOLDING.
  • Computer: OK. (does it)
  • Person: WHAT DOES THE BOX CONTAIN?
  • Computer: THE BLUE PYRAMID AND THE BLUE BLOCK. (From: http://hci.stanford.edu/winograd/shrdlu)

In this case, the program clearly shows some understanding. The words represent things in the computer’s world, things that have particular properties and that the computer knows about and can manipulate. But this is also a symbolic AI program, because it represents the box and the pyramid internally with the words “box” and “pyramid,” or with variables called BOX and PYRAMID (or B and P). It doesn’t matter which symbols are actually used; the point is just that boxes and pyramids are represented symbolically (with letters and digits) inside the program, rather than, say, by pictures that refer to the box and the pyramid.
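
To make this concrete, here is a rough, purely illustrative sketch in Python of such a symbolic representation. The names and structure are invented for this example and are not taken from Winograd’s actual program:

```python
# A toy blocks world, invented for illustration (not Winograd's code).
# Every object is a symbol (its name) attached to symbolic properties.
world = {
    "BLOCK1":   {"type": "block",   "color": "red",   "size": "big",   "location": "table"},
    "PYRAMID1": {"type": "pyramid", "color": "blue",  "size": "small", "location": "box"},
    "PYRAMID2": {"type": "pyramid", "color": "green", "size": "small", "location": "table"},
}
holding = None

def pick_up(description):
    """Find the one object matching the symbolic description and 'grasp' it."""
    global holding
    matches = [name for name, props in world.items()
               if all(props.get(key) == value for key, value in description.items())]
    if len(matches) != 1:
        return "I DON'T UNDERSTAND WHICH ONE YOU MEAN."
    holding = matches[0]
    world[holding]["location"] = "hand"
    return "OK."

print(pick_up({"type": "block", "size": "big", "color": "red"}))  # OK.
print(pick_up({"type": "pyramid"}))  # two pyramids match, so the request is ambiguous
```

The “world” here is nothing but symbols and their relations; whether the block is called BLOCK1, B, or anything else makes no difference to how the program works.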

Using such symbols, one can build symbol systems. These are formal systems that allow the symbols to be manipulated according to specific rules.

An example of a symbol system is the game of chess. The pieces are symbols for particular movement rules, and although they do look a little like physical things (horses, towers), they don’t actually represent horses and towers (so they are not pictures). They represent particular abilities to act inside the game world. There are many chess sets that change the look of the pieces to resemble science-fiction or fantasy figures, or that represent them completely abstractly (for example by letters: P, B, K, etc.). Still, all these representations refer to the same pieces and are played by the same rules. From this we can see that the pieces are symbols, not images.

In addition to the symbols, we also need rules for manipulating them. Chess gives us a set of such rules (the rules of the game). Using the symbols and the rules, I can now build complex expressions out of these symbols: the legal board positions that occur over the course of a game. It is amusing to look at chess as a symbol system in this way: any particular board position is an expression, and the game up to that point is a proof that this expression can be derived from the starting position (the initial expression) by following the rules of the game. A chess game is, in effect, a kind of mathematical proof.
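
A small, purely illustrative Python sketch of that idea: a position is just a mapping from square names to piece symbols, and a single simplified rule (the knight’s move, with captures and all other pieces ignored) lets us derive a new expression from an old one:

```python
# A chess position as a pure symbol expression: square names mapped to piece letters.
position = {"e2": "P", "g1": "N"}   # a fragment: one white pawn, one white knight

def knight_targets(square):
    """Squares a knight can jump to from 'square' (staying on the board)."""
    file, rank = ord(square[0]) - ord("a"), int(square[1])
    jumps = [(1, 2), (2, 1), (2, -1), (1, -2), (-1, -2), (-2, -1), (-2, 1), (-1, 2)]
    return {chr(ord("a") + file + df) + str(rank + dr)
            for df, dr in jumps
            if 0 <= file + df < 8 and 1 <= rank + dr <= 8}

def apply_knight_move(pos, source, target):
    """Rewrite the expression: derive a new position from the old one by one rule."""
    if pos.get(source) == "N" and target in knight_targets(source):
        new_pos = dict(pos)
        del new_pos[source]
        new_pos[target] = "N"
        return new_pos
    raise ValueError("not derivable under the sketched rule")

# One derivation step: g1 to f3 is a legal knight move. A whole game is a chain
# of such steps, in effect a proof that the final position is reachable.
print(apply_knight_move(position, "g1", "f3"))   # {'e2': 'P', 'f3': 'N'}
```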

Other examples of symbol systems are formal logic, for instance the predicate calculus, and human languages, where words (symbols for things) are manipulated according to the rules of grammar and syntax to produce complex expressions (well-formed sentences).
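
As an even smaller example than the predicate calculus, here is an illustrative Python sketch of a propositional system whose only manipulation rule is modus ponens:

```python
# A miniature formal system, for illustration only.
# Symbols are strings; an implication is the expression ("->", antecedent, consequent);
# the single manipulation rule is modus ponens: from P and P -> Q, derive Q.

def modus_ponens(facts, implication):
    op, antecedent, consequent = implication
    if op == "->" and antecedent in facts:
        return facts | {consequent}   # derive a new expression
    return facts

facts = {"it_rains"}
rule = ("->", "it_rains", "street_is_wet")

facts = modus_ponens(facts, rule)
print(facts)   # {'it_rains', 'street_is_wet'}
```

The three ingredients listed below (symbols, expressions, and rule-governed processes) are all present even in this tiny system.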

So every symbol system has the following parts:

  • Symbols that represent things in the mind of the observer.
  • Expressions built up from these symbols.
  • Processes for manipulating these symbols according to rules.

And symbolic AI, or, as it is also known, GOFAI (Good Old-Fashioned AI), is the research project based on the following three assumptions:

  1. The mind is a symbol system.
  2. Cognition is symbol manipulation.
  3. Complex behaviour can be created by symbol manipulation alone.

We will talk more about possible criticisms of this project in future posts. Stay tuned!