Aristotle’s AI

Aristotle is everywhere. He is like a thread hanging inconspicuously out of a piece of clothing which, when you start pulling at it, unravels the whole garment: whenever one starts unravelling a philosophical argument, one ultimately ends up with either Plato or Aristotle.

Recently I heard a talk by a PhD student in philosophy. He was talking about his cutting-edge research in meta-metaphysics. I won’t pretend to know what that is, but when the title alone contains two “metas,” it’s probably something pretty advanced and clever. And what was the first thing he said in his talk? That modern meta-metaphysics has essentially two camps: the neo-Quineans and the neo-Aristotelians!

So if Aristotle pops up everywhere, then, of course, AI won’t be an exception. The man himself was born in 384 BC, almost exactly 2400 years ago. Computers, even in their most primitive digital forms, have existed since 1941 (Konrad Zuse’s Z3), that is, for about 76 years. And those early machines could barely add two numbers, much less do anything remotely resembling AI. So how could Aristotle possibly have anything to do with Artificial Intelligence? Let’s see.

Laws of thought

For one, Aristotle gave us his “laws of thought.” Essentially, these are common-sense rules about how we think (their precise philosophical interpretation is a bit more involved, but need not concern us here):

  1. The Law of Contradiction: If P is a proposition (a sentence that can be true or false, like: “it is raining now”), then it is impossible that P and ‘not P’ are true at the same time. More simply stated, it cannot both be true that it rains and that it doesn’t rain (at the same place and time).
  2. The Law of Excluded Middle: If P is a proposition, then either P must be true, or ‘not P’ must be true. There is no third option; in particular, they cannot both be false.
  3. The Principle of Identity: Everything is identical with itself.

Together these laws were considered necessary for proper, logical thought, and they are still the basis for all digital (binary) computing:

  1. A bit that is ‘1’ cannot be ‘0’ at the same time (Law of Contradiction);
  2. For every bit, it must either be ‘1’ or ‘0’ (Law of Excluded Middle);
  3. A bit that is ‘1’ has the value ‘1’; or: every bit has the same value as itself (Principle of Identity).
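Since a bit has only two possible values, these laws can even be verified by exhaustive enumeration. Here is a minimal sketch in Python (the formulation is mine, purely for illustration):

```python
# A minimal sketch: checking Aristotle's three laws for every
# possible bit value, with False standing for '0' and True for '1'.

for p in (False, True):
    assert not (p and not p)  # Law of Contradiction: never both P and not-P
    assert p or not p         # Law of Excluded Middle: P or not-P must hold
    assert p == p             # Principle of Identity: every bit equals itself

print("All three laws hold for both bit values.")
```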

Clearly, classic digital computing would make little sense without these assumptions. Note, though, that they do not necessarily apply to quantum computers or analog computing hardware; at the very least, their interpretation in those contexts becomes more difficult.

Syllogisms and predicate logic

Aristotle was also the first to examine syllogisms systematically. Aristotelian syllogisms are arguments of the form:

  1. All tigers are animals
  2. All animals have legs
  3. Conclusion: Therefore, all tigers have legs.

Such syllogisms can also go wrong. For example:

  1. All ships float on the water.
  2. Empty bottles float on the water.
  3. Conclusion: Therefore, empty bottles are ships.

There are various ways to represent such syllogisms more formally in order to see whether they are valid: for example, in predicate logic, or as Venn diagrams. And clearly, a machine that attempts to reason logically must employ such syllogisms in some form as part of its reasoning.
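To make this concrete, here is a toy sketch in Python that models “All A are B” as set inclusion; the sets and their members are invented purely for illustration:

```python
# "All A are B" is modelled as: set A is a subset of set B.
# Because subset inclusion is transitive, the valid syllogistic
# form guarantees its conclusion; the invalid form does not.

tigers  = {"shere_khan", "hobbes"}
animals = tigers | {"rex"}           # all tigers are animals
legged  = animals | {"table"}        # all animals have legs

# Valid: the conclusion follows necessarily from the premises.
assert tigers <= animals and animals <= legged
assert tigers <= legged              # therefore, all tigers have legs

# Invalid: both premises hold, yet the conclusion fails.
floating = {"titanic", "bottle"}
ships    = {"titanic"}               # all ships float on the water
bottles  = {"bottle"}                # empty bottles float on the water
assert ships <= floating and bottles <= floating
assert not (bottles <= ships)        # but empty bottles are not ships
```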

Substances

Another Aristotelian distinction is that between substances and the predicates of substances. We can distinguish a particular dog (an “object” in object-oriented programming parlance) from the class it belongs to (“dogs”). Further, we can assign specific values to various properties of the dog: its colour is brown, its weight is four kilograms, and so on. These concepts underlie not only object-oriented programming but also, for example, relational and object databases, and they are often the basic organisational principle of knowledge representation in (classic, symbolic) AI systems.
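In code, the distinction looks roughly like this; a minimal sketch in Python, with the class name and attributes chosen only for illustration:

```python
# The class "Dog" plays the role of the Aristotelian kind;
# a particular instance plays the role of an individual substance,
# and its attributes carry the predicated property values.

class Dog:
    def __init__(self, name: str, colour: str, weight_kg: float):
        self.name = name              # this individual dog
        self.colour = colour          # predicate: its colour is ...
        self.weight_kg = weight_kg    # predicate: its weight is ...

# One particular substance with its own property values.
fido = Dog(name="Fido", colour="brown", weight_kg=4.0)

print(isinstance(fido, Dog))   # True: the object belongs to the class "dogs"
print(fido.colour)             # "brown"
```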

Impact

Even such a short look at Aristotle’s contributions to AI clearly shows a few interesting points:

  • There is a remarkable match between binary logic (as derived from Aristotle’s Laws of Thought) and the engineering principles of electrical and electronic circuits. Electricity can flow or not flow; a switch can be on or off. This exactly matches the binary nature of the true/false distinction for propositions in Aristotelian logic. So, naturally, Aristotelian logic came to underlie all digital (binary) computation.
  • From this match come some of the best and also some of the worst consequences of digitalisation. On the positive side, digital binary logic allowed us to build machines that evaluate arithmetic and logical expressions with incredible speed and accuracy, making modern databases, flight control computers, insurance calculations and every other application of computing power possible.
  • On the negative side, though, this binarisation of the world led to a severe loss of nuance that now affects our everyday lives. In forms, for example, one has to fill in one’s relationship status, and usually the options include only: single, married, widowed. That this does not describe social reality was soon recognised, and modern social media profiles include a collection of more realistic relationship descriptors, like “living separated” or “it’s complicated.” Yet even five or ten such categories are still digital, in the sense that they can never capture the full complexity of human relations.
  • Another example is restaurant ordering. In earlier times, one could tell the waiter what one wanted to eat and add special requests on how the meal should be prepared and presented (the movie “When Harry Met Sally” offers an amusing take on this). But with the advent of digital computers in order processing, many restaurants can no longer take such orders. Now the ideal order is by number (“meal 23”), and no options or modification requests are accepted, because no system is in place that could store and process them. As a result, the richness of human life and desire is drastically reduced, and to a great extent lost: life and human choice become more predictable, and original, deeply personal choices are discouraged or even made impossible in many contexts. As another example, see how difficult it has become to apply for a job with a non-standard CV: either one’s CV fits into the standardised form fields, or one’s experience cannot be processed and appreciated. Human life is thus reduced to a set of standardised menu choices, which could, arguably, be said to strip it of its essential humanity. This too, then, is an effect of Aristotelian thought on life, as mediated through digital technology.
Related:  "Give us some rules to implement!"

Related Posts