Briefing: Utilitarianism


Summary

  • Jeremy Bentham (1748-1832)
  • John Stuart Mill (1806-1873)
  • Consequentialist theories look only at the consequences of an action to judge whether it is right or wrong. A consequentialist would say that an action is morally right if its consequences lead to a situation that is clearly better than things were before the action.
  • Utilitarianism is one particular form of consequentialism, in which the “good consequence” is taken to be the one that maximises happiness (or: welfare, benefit) for all people concerned.

Theory

  • No actions are good or bad in themselves (unlike in Kant’s ethics, where some actions are wrong regardless of their consequences).
  • The moral goodness or badness of an action depends only on its consequences. This is opposed to deontological ethics.
  • In Bentham’s utilitarianism, an action is good if it maximises utility.
  • Utility is, roughly, the greatest amount of happiness for the greatest number of people (a minimal calculation sketch follows this list).
  • Bentham and Mill tried to specify how pleasure could actually be quantified (Bentham’s “felicific calculus”), but no proposed method has worked well in practice.
  • Mill: Some pleasures are of a higher, more worthy kind than others: “It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied.”
  • Utilitarianism considers the welfare not only of human beings, but of any being that can feel happiness and pain.
  • It would therefore consider the welfare of animals, and compare animal pain to human benefit (for example in the case of medical experiments on animals, or the morality of meat eating).
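
As a minimal, purely illustrative sketch of this idea, the following Python snippet sums assumed happiness changes for every being affected by an action (humans and animals alike) and picks the action with the highest total. The actions, affected parties, and all numbers are invented; assigning real happiness values is exactly what Bentham and Mill could not pin down.

    # Minimal sketch of Bentham-style act utilitarianism.
    # All actions, beings and numbers below are invented for illustration;
    # real "happiness scores" are precisely what is hard to quantify.

    def total_utility(effects):
        """Sum the assumed happiness change of every being affected."""
        return sum(effects.values())

    def best_action(candidates):
        """Choose the action whose consequences maximise total utility."""
        return max(candidates, key=lambda name: total_utility(candidates[name]))

    candidates = {
        # action: {affected being: assumed change in happiness}
        "run_experiment":  {"patients": +8, "lab_animals": -6},
        "skip_experiment": {"patients": -3, "lab_animals": 0},
    }

    print(best_action(candidates))  # -> "run_experiment" under these made-up numbers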

Variants

In the 20th century, many different variants of utilitarianism emerged. Some are:

  • Ideal utilitarianism (G. E. Moore): Not only happiness counts, but other values too (knowledge, love, enjoyment of beauty).
  • Act and rule utilitarianism: Instead of calculating the consequences of every single action (act utilitarianism), we should follow rules that are known to maximise utility (rule utilitarianism). Advantage: no per-case calculations are necessary, which is more efficient (see the sketch after this list).
  • Preference utilitarianism (Harsanyi, Hare, Singer): In deciding what is good and what is bad for a given individual, the ultimate criterion must be that individual’s own wants and preferences.
    • Manifest preferences: preferences manifested by observed behaviour, including preferences possibly based on erroneous factual beliefs, on careless logical analysis, or on strong emotions that at the moment greatly hinder rational choice. These should not be considered by preference utilitarianism!
    • True preferences: preferences based on true information, correctly reasoned, in a rational state of mind. These are the preferences that should be considered.
    • We must also exclude immoral and antisocial preferences (e.g. the preference to kill someone whom we hate).
  • Negative utilitarianism (Popper): Instead of maximising pleasure, we should minimise pain.
  • Motive utilitarianism (Robert M. Adams): Select motives and dispositions according to their general felicific effects; those motives and dispositions then dictate our choices of actions (rather than evaluating actions directly).
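
The difference between act and rule utilitarianism can be made concrete with a small sketch: the act utilitarian scores each candidate action by its own consequences, while the rule utilitarian only checks whether an action conforms to rules that were adopted because following them generally maximises utility. The actions, rules and numbers below are invented for illustration.

    # Sketch contrasting act and rule utilitarianism (illustrative numbers only).

    def act_utilitarian_choice(candidates):
        """Act utilitarianism: score every action by its own consequences."""
        return max(candidates, key=lambda a: sum(candidates[a].values()))

    def rule_utilitarian_choice(candidates, rules):
        """Rule utilitarianism: pick an action permitted by the adopted rules;
        no per-case calculation of consequences is performed."""
        permitted = [a for a in candidates if rules.get(a, False)]
        return permitted[0] if permitted else None

    candidates = {
        # action: {affected person: assumed change in happiness}
        "break_promise": {"promiser": +3, "friends": +4, "relative": -2},
        "keep_promise":  {"promiser": -1, "relative": +5},
    }
    # Rules adopted because following them maximises utility in general:
    rules = {"keep_promise": True, "break_promise": False}

    print(act_utilitarian_choice(candidates))          # -> "break_promise" (+5 vs +4)
    print(rule_utilitarian_choice(candidates, rules))  # -> "keep_promise"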

It is not entirely clear, nor generally agreed, which of these is the best (or most useful, or most sensible) interpretation of utilitarianism.

Preference utilitarianism has been explored mathematically in some detail, but for a robot implementation the main problem is that the actual preferences of people are generally not known to the robot.
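
One way to see the difficulty is a sketch in which the robot does not know a person’s true preferences and can only work with a guessed probability distribution over possible preference profiles, maximising expected preference satisfaction. All profiles and probabilities below are made up; obtaining them is exactly the open problem.

    # Sketch: preference utilitarianism when the true preferences are unknown.
    # The robot only has a guessed distribution over possible preference
    # profiles (all numbers invented).

    # Each profile maps outcomes to preference-satisfaction scores.
    possible_profiles = [
        (0.6, {"tea": 2, "coffee": 5}),  # the person probably prefers coffee...
        (0.4, {"tea": 4, "coffee": 1}),  # ...but might prefer tea
    ]

    def expected_satisfaction(outcome):
        """Expected preference satisfaction of an outcome, given the robot's
        uncertainty about which preference profile is the true one."""
        return sum(p * profile[outcome] for p, profile in possible_profiles)

    outcomes = ["tea", "coffee"]
    best = max(outcomes, key=expected_satisfaction)
    print(best, expected_satisfaction(best))  # -> coffee 3.4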

Criticism

  • Utilitarianism is generally criticised because it considers only the results of actions, not their motivation or other factors.
  • For example, someone might promise a forgetful, elderly relative to visit him in hospital. Before the visit, a group of friends asks the person to come to a party instead. Since going to the party will maximise happiness (the elderly relative might already have forgotten the promise, and the party-goers are more numerous), utilitarianism implies that it would not be morally wrong to break the promise.
  • As a more extreme example, if killing someone whom everyone hates would increase the total happiness in the universe, then this might be a morally right action.
  • Similarly, slavery, murder, lying, theft, and any other action can be justified under the right circumstances, provided that it maximises happiness for the greatest number.
  • Utilitarianism would not recognise any human rights that have to be respected independently of their influence on the sum of a society’s happiness.
  • Particularly for robot ethics, the big problem seems to be that the robot generally will not have the required knowledge about:
    1. the future effects of its actions,
    2. general human preferences,
    3. the particular preferences of the people involved,
    4. and the relative importance of sets of preferences when compared to each other (preference for life vs. preference for an ice cream). The sketch after this list shows where these missing pieces would enter the calculation.
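
The shape of the calculation a utilitarian robot would have to perform makes these gaps visible: an expected-utility sum needs outcome probabilities (point 1) and comparable preference weights (points 2-4), and the robot has no principled source for either. The sketch below only marks where those missing numbers would have to go; every value is a placeholder.

    # Sketch of the expected-utility sum a utilitarian robot would need,
    # with placeholders where the missing knowledge would have to go.

    def expected_utility(action, outcome_model, preference_weights):
        """outcome_model[action]: probabilities of future effects (missing: point 1).
        preference_weights[effect]: how much the people involved value that
        effect, on a common scale (missing: points 2-4)."""
        return sum(prob * preference_weights[effect]
                   for effect, prob in outcome_model[action].items())

    # Placeholder numbers -- the robot has no principled source for any of them:
    outcome_model = {
        "act":     {"benefit": 0.7, "harm": 0.3},
        "abstain": {"benefit": 0.1, "harm": 0.0, "nothing": 0.9},
    }
    preference_weights = {"benefit": +10, "harm": -40, "nothing": 0}

    for action in outcome_model:
        print(action, expected_utility(action, outcome_model, preference_weights))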

Examples

  1. Being honest to one’s friends is morally right because it maximises happiness for all.
  2. Lying can be morally right under the right circumstances (for instance, a “white” lie).
  3. Democracy and human rights are good, not because they are valuable in themselves, but because people tend to be happier having them.
  4. It can be morally right for a car to kill one person in order to avoid killing two. It might even be morally right to kill a criminal in order to save the life of a more “valuable” member of society (say, a scientist).
  5. There is no good reason to assume that all humans are equally valuable. They are valuable to the extent that they contribute to society’s welfare.

How to use in robot ethics

  1. For example, a self-driving car has the choice between killing one person and killing two people. According to utilitarianism, the car should kill the one person, all other things being equal (see the sketch after this list).
  2. If the one person is more valuable for society than the other, the car should kill the less valuable one.
  3. If this maximises the sum of happiness, then the car should not hesitate to kill its owner rather than a stranger on the street.
  4. A self-driving truck has the choice between destroying a very expensive museum building, including the priceless art stored in it, or killing a homeless criminal who sleeps in the street beside the museum. For utilitarianism, human beings have no absolute value; killing the homeless criminal would be morally right if it maximises happiness or welfare for society as a whole.
  5. A self-driving car has the choice of lightly injuring one person or killing twenty priceless pedigree dogs on their way to an exhibition. For utilitarianism, the dogs might be relevant, depending on who would profit from their being alive, and how much. The light injury, on the other hand, represents a small amount of harm and might be the preferable outcome.
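
A minimal sketch of how a utilitarian decision rule could be wired into scenarios like the ones above: each option gets an assumed welfare impact for everyone affected, and the car picks the option with the highest total (i.e. the least total harm). Every number is invented, and assigning such numbers is precisely where the philosophical controversy lies.

    # Sketch of a utilitarian decision rule for crash scenarios like the above.
    # All welfare numbers are invented; choosing them is the hard (and
    # controversial) part of any real utilitarian implementation.

    def utilitarian_choice(options):
        """Pick the option with the highest total welfare impact
        (equivalently, the least total harm)."""
        return max(options, key=lambda name: sum(options[name].values()))

    # Scenario 1: kill one person or kill two people (all else being equal).
    scenario_1 = {
        "hit_one": {"person_a": -100},
        "hit_two": {"person_b": -100, "person_c": -100},
    }

    # Scenario 5: lightly injure one person or kill twenty pedigree dogs.
    scenario_5 = {
        "injure_person": {"pedestrian": -10},
        "kill_dogs":     {"twenty_dogs": -400, "dog_owners": -30},
    }

    print(utilitarian_choice(scenario_1))  # -> "hit_one"
    print(utilitarian_choice(scenario_5))  # -> "injure_person"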

Where to go from here