Defeasible logic


Limits of Aristotelian logic in real life

We talked about Aristotelian logic and how it is useful to AI in another post.

But two-valued logic (which recognises only true or false statements) cannot easily deal with natural language, where statements are not clearly “true” or “false,” but can take values in between, such as “often true” or “almost always true.”


  • “If it’s winter, then the heating is on.”
  • “All birds fly.”

Are these always true?

Obviously not. There might be many reasons for the heating to be actually off:

  • Lack of money to pay the bill,
  • No fuel,
  • The heater is broken,
  • The occupant has gone out,
  • It’s a warm winter’s day and no heating is needed.

The same is the case with “all birds fly.” Of course, most birds fly, but chickens fly very badly, and penguins don’t fly at all. So the statement is not strictly true all the time, as Aristotelian logic would require.


Consider the statements:

  • Tweety is a bird.
  • All birds fly.
  • If something is a penguin, then it is a bird.
  • Tweety is a penguin.
  • Penguins don’t fly.

Does Tweety fly?

The answer should be no, because Tweety is a penguin, and penguins don’t fly. But from the point of view of a logic that allows only true or false statements, these premises are contradictory: if Tweety is a bird, and all birds fly, then he should fly. This contradicts the fact that penguins don’t fly, although they are birds.


What is the root of the problem here? Not all birds actually fly. But so many of them do that if we left this premise out of a system that reasons about the world, the system would miss too many of the true statements that apply to birds. To reason correctly about almost all birds, penguins aside, we really need to know that they fly.

“Defeasibly follows”

To solve the problem with Tweety, we would like to express the fact that some statements are only true “most of the time,” or “usually,” but not strictly always.

Defeasible rules are “reasons to believe.” They are general rules that will be true most of the time, but might also be false in particular situations.

In normal Prolog (which we talked about in another post, and which has two truth values only), we would say:

heating_on :- winter.

meaning: the heating is on if it is winter. Remember that the notation “:-” expresses an “if.”

In a defeasible logic, we would instead write:

heating_on -< winter.

Here, the symbol “-<” (often other symbols are used, for example a squiggly arrow, which is hard to reproduce here), means:

  • “If it is winter, then the heating will (usually) be on” ; or:
  • “If it is winter, there is reason to believe that the heating will be on.”
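The behaviour of such a rule can be sketched in ordinary code. The following is a minimal illustration (not an implementation of any particular defeasible logic): the conclusion follows by default from the premise, but any known exception defeats it. The function name and the set of defeaters are chosen for this example only.

```python
def heating_on(facts):
    """Defeasible rule heating_on -< winter: holds by default,
    but is defeated by any known exception."""
    # Illustrative defeaters, taken from the list of reasons above.
    defeaters = {"no_fuel", "heater_broken", "occupant_out", "warm_day"}
    if "winter" not in facts:
        return False  # the rule's premise does not hold
    if facts & defeaters:
        return False  # a known exception defeats the rule
    return True       # defeasibly conclude: the heating is on

print(heating_on({"winter"}))              # True
print(heating_on({"winter", "no_fuel"}))   # False: rule defeated
```

Note that the rule is not probabilistic: it either fires or is defeated, and adding a new fact can withdraw a conclusion that was previously drawn. This non-monotonic behaviour is exactly what two-valued logic lacks.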

Defeasible assumptions

Now we can return to Tweety. To make better sense of it, I could express it using defeasible assumptions (marked with D:):

  • Tweety is a bird.
  • D: All birds fly.
  • If something is a penguin, then it is a bird.
  • Tweety is a penguin.
  • Penguins don’t fly.

Does Tweety fly?

Here we are saying that there is reason to believe that all birds fly, but there can be exceptions to that rule. You can actually try this out. D-Prolog is a Prolog interpreter on the web that can handle defeasible assumptions.

Here is our fact base in D-Prolog:

bird(tweety).
fly(X) -< bird(X).
bird(X) <- penguin(X).
penguin(tweety).
~fly(X) <- penguin(X).

Now we can ask the query:

?- fly(tweety).

And we get the correct answer: false.
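The resolution strategy behind this answer can be sketched in Python. This is not d-Prolog itself, only an illustration of one common policy: a strict rule (here, “penguins don’t fly”) overrides a defeasible one (“birds fly”). The predicate names mirror the fact base above; the data structures are assumptions of this sketch.

```python
# Fact base, mirroring the d-Prolog program above.
facts = {"bird": {"tweety"}, "penguin": {"tweety"}}

# Strict rule bird(X) <- penguin(X): every penguin is a bird.
facts["bird"] |= facts["penguin"]

def flies(x):
    # Strict rule ~fly(X) <- penguin(X) overrides the
    # defeasible rule fly(X) -< bird(X).
    if x in facts["penguin"]:
        return False  # strictly: penguins don't fly
    if x in facts["bird"]:
        return True   # defeasibly: birds fly
    return False

print(flies("tweety"))  # False, matching the d-Prolog answer
```

Without the penguin facts, the defeasible rule would fire and `flies` would return True for any bird: the conclusion is withdrawn only when the more specific, strict information defeats it.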

Why do we need ‘defeasible’ logic?

Of course, it would be best if we could have a probabilistic logic, where every statement has an attached probability. In this case, we would do deductions by calculating the probability of the conclusion given the probabilities of the premises.

But this is difficult to do for various reasons:

  • In many cases, we don’t know the correct probabilities (how many birds are penguins? How many species of bird don’t fly?)
  • Humans are bad at estimating probabilities for everyday events.

Therefore, the concept of a ‘defeasible’ conclusion is a kind of intermediate, stop-gap solution. Instead of precisely calculating probabilities (which is usually impractical in real life), or ignoring them entirely and creating contradictory sets of premises, we take a middle road: a defeasible assumption is generally true, but not always. The machine can treat it as an assumption, but if there are good reasons to think it is false, the machine can assume it is false without creating a contradiction. This gives us the advantages of both extremes (binary logic and probabilities) without their respective drawbacks.
