Prolog: Programming in Logic


Symbolic and subsymbolic processing

In our series on AI technologies and their importance for society, we are now looking at an example of what is called the symbolic approach to artificial intelligence.

As opposed to so-called subsymbolic systems, symbolic AI tries to represent the things of the world inside the computer as symbols: variables in a programming language, propositions in a kind of logical calculus, and so on. A “thing,” let’s say a car, could be represented as a software object or a database record, and it would be described by a set of attributes, like: manufacturer, model, colour, engine type, weight, number of passengers, number of doors, and so on. Each of these attributes would be stored inside the computer as a variable that holds a value: the symbol (that is, the property) “manufacturer,” for instance, could hold the value “Toyota,” or “BMW,” or “Fiat.”
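To make this concrete, here is a minimal sketch of how such a record could be written down symbolically, in the notation of Prolog, the language we will look at below (the identifiers and values are invented for illustration):

car( car42 ).
manufacturer( car42, toyota ).
colour( car42, red ).
doors( car42, 4 ).

Each line simply pairs the symbol car42 with a value for one attribute; nothing more than these symbol–value pairs is stored.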

Contrast this for a moment with the way our brain works. If we open up a computer, we will find some particular memory location in which the value of the variable “manufacturer” is stored. The computer stores symbols, that is, variables and their values, directly. On the other hand, opening a human brain is unlikely to reveal a particular location where, say, the name “Toyota” is stored. It must be stored somewhere, but it doesn’t look like it is neatly tucked away between neurons 13776 and 13781 (we don’t really know how data storage works in the brain, though, so we might be mistaken about that).

From what we know, it looks like information in the brain is not stored in the form of explicit symbols, but in the form of connections between brain cells (neurons). Remembering the name “Toyota” would mean that inside a group of neurons somewhere in one’s brain, there is a stronger connection between remembering the letter “T” and, immediately afterwards, associating it with the letter “O” (and so on). The whole word, in turn, gets recalled when another group of neurons (responsible for vision) recognises a particular shape in one’s field of vision: the shape of a car of that type. And so on. Storage of the word “Toyota” is therefore not isolated in the brain, as it is in a conventional computer. Instead, it is widely distributed across multiple neurons, and connected with various neuronal subsystems that are responsible for vision processing, letter and word recall, memory, even smell: if, as a child, one associated the smell of a particular car freshener with the family’s Toyota, then that smell is likely to trigger a recall of the “Toyota” memory later in life. And this will happen even if no actual car is present anywhere near.

We can see how sometimes cognition is independent of symbols. For example, we might recognise the smell of a place. We might refer to it when we talk to others as “that smell, you know, of that particular place” (which is not actually describing anything, since we lack a good vocabulary for describing smells). So in this case, we are able to reliably recognise and identify a smell, but this processing is not symbolic: the recognition of the smell is not mediated through words and labels that we attach to the impression of the smell. Instead, we process the smell directly, as a smell, and this is presumably what a dog would also do when it recognises the smell of its owner without the use of an explicit description in words.


Prolog

Let’s now go back to symbolic processing. This is the type of processing we associate with traditional programming languages, like C or Pascal, but also with formal logic, mathematics, and even everyday language: saying a word, for example “cup,” is an act of symbolic processing. Instead of actually dealing with a physical cup, I am processing a symbol for a cup in my mind: the word “cup.”

Prolog, a programming language developed at the beginning of the 1970s, combines this symbolic approach with basic concepts from formal logic, to make it possible to program computers “declaratively.” Most common programming languages are imperative languages: they tell the computer exactly what to do and how to do it, step by step. For example, in order to sell a product on an Internet website, the vendor must (1) display the product’s image and a button to buy it; (2) if the button is pressed, the product ID must be transferred into a shopping cart structure; (3) when the customer has finished shopping, he presses another button to check out; (4) if the check-out button is pressed, the website displays the contents of the shopping cart with their respective prices and a button to complete the transaction; (5) when that button is pressed, the amount shown is deducted from the user’s credit card; (6) if he does not have a credit card on record, the form to enter the credit card details must be displayed; and so on.

Conversely, in a declarative language, the programmer would “declare” what relationships exist between the symbols (which, in turn, represent things in the world) and then leave it to the program to find a solution to the problem. For instance:

teacher( sandy ).
teacher( john ).
student( mary ).
student( peter ).
student( nick ).
student( mei ).

After entering these “facts” into the Prolog system, we could ask a question in the form of a query:

teacher( X ).

The Prolog system would then try to match the variable X (note the capital letter, which tells Prolog that this is a variable!) against the names given in the collection of facts, and would give the answers:

X=sandy
X=john

But we can do more with Prolog. We could, for example, define a two-place teacherOf( teacher, student ) predicate, to record who is teaching whom:

teacherOf( john, mary ).
teacherOf( john, peter ).
teacherOf( sandy, nick ).
teacherOf( sandy, mei ).

This works as we would expect: teacherOf( X, mary ) would return X=john. But we can go further. We can now define a predicate “classmate” that uses the teacherOf predicate. The idea is that a classmate is someone with whom you have the same teacher:

classmate( X, Y ) :- student(X), student(Y), teacherOf( A, X ), teacherOf( A, Y ).

The “:-” means that the expression on the left will be true if the expressions on the right are true. You can just read it as “if”. The commas on the right side (“,”) express a logical “and”. That means that all the expressions on the right have to be true in order for the system to consider the predicate classmate to be true.
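To see how this works, consider how Prolog would establish the goal classmate( mary, peter ) from the facts above:

classmate( mary, peter )
    student( mary )             % succeeds: it is a fact
    student( peter )            % succeeds: it is a fact
    teacherOf( A, mary )        % succeeds, binding A = john
    teacherOf( john, peter )    % succeeds: it is a fact

All four conditions on the right-hand side succeed, so classmate( mary, peter ) is true.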


Now we can ask:

classmate( X, peter ).

That means: Who (X) is a classmate of Peter? The system answers:

X=mary
X=peter

Oops. Something strange happens here. What exactly? Well, if you look at the predicate classmate, it never says that one cannot be one’s own classmate! Since everyone has the same teacher as oneself, logically, everyone who is a student is always also their own classmate!
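You can check this against the rule: for the goal classmate( peter, peter ), every condition succeeds:

student( peter )            % fact (X is peter)
student( peter )            % fact (Y is also peter)
teacherOf( A, peter )       % succeeds, binding A = john
teacherOf( john, peter )    % fact

Nothing in the rule excludes this case, so Prolog dutifully reports it.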

If we wanted to change that, we would have to exclude the case that someone is one’s own classmate, like this:

classmate( X, Y ) :- student(X), student(Y), teacherOf( A, X ), teacherOf( A, Y ), X\=Y.

The operator “\=” in Prolog means “not equal,” so that in this case it is additionally required that X is different from Y. Let’s see:

classmate( X, peter ).

Answer: X=mary, as expected.

We can also try:

classmate( X, sandy ).

which correctly answers false, meaning that no solution can be found, since Sandy is not a student but a teacher, and we explicitly limited the classmate predicate to students.
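With the corrected rule and the facts above, we can also ask for all classmate pairs at once; the order of the answers simply follows the order of the facts in the database (the final false means there are no further solutions):

?- classmate( X, Y ).
X = mary, Y = peter ;
X = peter, Y = mary ;
X = nick, Y = mei ;
X = mei, Y = nick ;
false.

Note that each pair appears twice, once in each direction, because the rule as written is symmetric in X and Y.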

Syntax and meaning

Observe that the system does not know anything about teachers, or students, or Peter, Sandy, or Mary. The symbols “teacher,” “student,” and so on are just that: symbols, words that stand for something in the mind of the observer of the program, but that don’t mean anything to the program itself. The program just happily uses the same symbols the human operator used, and it is entirely up to the operator to project a suitable interpretation onto those symbols!

For example:

Prolog: gugu(lala).
English interpretation: ?

Prolog: ? gugu(X).
English: “Which X is a gugu?”
Answer: X=lala.

Although this exchange is nonsensical, because “gugu” and “lala” are not words that have any meaning in our language, Prolog will treat them just like any other symbols, say “cat,” or “dog”. Because, of course, for Prolog “cat” and “dog” also have no meaning at all. They are exactly as meaningful to the system as “gugu” and “lala”. Words like “cat” and “dog” are only meaningful to the human operator, not to the system.


Syntactic manipulation

This is at once the core feature and the core problem of symbolic AI of this type: the symbols it uses carry no meaning inside the computer. Their manipulation is based not on any meaning, but only on syntactic, that is, positional, properties of a symbol inside an expression. “gugu(X)” in Prolog will give X any value that appears in the database of facts inside the brackets for “gugu(),” without being at all bothered by the question of what “gugu” might mean.
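We can make this vivid with a deliberately meaningless variant of the teacher example (all names invented):

p1( s1 ).
p1( s2 ).

?- p1( X ).
X = s1 ;
X = s2.

The answers are structurally identical to those for teacher( X ); at no point does the computation depend on what, if anything, the symbols stand for.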

This is, in a sense, what happens inside the Chinese Room. The person inside the room manipulates the Chinese characters according only to the way they look, and how they match the characters in his rulebook, without considering their meaning (because he doesn’t speak Chinese). In fact, the Chinese Room argument was inspired by symbolic AI systems of just this kind, and is meant directly as a criticism of the idea that such systems could ever achieve genuine understanding.

With this, we arrive at the central assumption of symbolic AI:

If the symbol manipulation preserves the original relations between the symbols, the mapping of symbol to meaning can be left to the mind of the operator.

Or, as John Haugeland put it: “Take care of the syntax, and the semantics will take care of itself.”

It should perhaps be mentioned that not all symbolic AI has this problem. SHRDLU is an example of a symbolic AI system that is not affected by the Chinese Room criticism, since it does have a kind of “understanding” of what the symbols that it uses mean. Symbols like “a blue box” or “a red pyramid” represent particular objects that the system can recognise and manipulate, and thus they are not mere symbols without meaning, but meaningful representations of actual objects that the system can experience. In this case, we would say that SHRDLU’s symbols are “grounded,” which means that they have corresponding objects in the “real world” of the system (even if that “real world” is itself a simulation).
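As a very rough, Prolog-flavoured sketch of the difference (SHRDLU itself was not written in Prolog, and these predicate names are invented for illustration): an ungrounded fact merely pairs one token with another, while a grounded one ties the symbol to data the system can actually obtain from its (simulated) world:

% ungrounded: "blue" is just a token paired with another token
colour( box1, blue ).

% grounded (sketch): "blue" is defined in terms of sensor data
% that the system itself can read from its simulated scene
percept_rgb( box1, 30, 60, 210 ).
colour( Obj, blue ) :- percept_rgb( Obj, R, G, B ), B > 150, R < 100, G < 100.

In the second version, the truth of colour( Obj, blue ) no longer depends only on which facts the operator typed in, but on what the system’s own sensors report.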