Briefing: Kant’s Ethics


  • Immanuel Kant (Königsberg, now Kaliningrad; 1724-1804)
  • “Deontological” ethics: Particular actions are good or bad in themselves, no matter what the consequences are. Example: Killing someone is always bad, as is lying or stealing. Being honest or helping others is always good.
  • This is opposed to consequentialism and utilitarianism, for example.


  • Actions that are always good are called “categorically” good.
  • An action is morally right if:
    1. It has a good motivation. A motivation is good if one performs the action purely out of a sense of duty, rather than for some expected personal gain.
    2. In addition to having a good motivation, the action must conform to the two formulations of the so-called “Categorical Imperative” (= “unconditional command”). First: everyone should be able to act on the same principle without causing any logical problems. Second: human beings must always be respected (treated as ends) rather than used merely for the purposes of another (treated as means only).
  • If an action fails any of these three tests, it is morally wrong.

Numbers refer to the picture:

  1. Good motivation is necessary but not sufficient for morally right action!
  2. ‘Categorical’: must be obeyed, unconditional; ‘Imperative’: order, command.
  3. ‘Maxim’: the principle behind an action.
  4. You simply cannot will a maxim that is immoral. Doing so would lead either to a contradiction (for perfect duties) or to a world you don’t want (for imperfect duties). Morality is dictated by reason.
  5. ‘Humanity’: all rational and autonomous beings (including, for example, rational and autonomous aliens, but not animals!)
  6. You have to respect yourself in the same way as you respect others (sacrifice, suicide!)
  7. You can treat others as means, but not only as means (e.g. taxi drivers, waiters in restaurants, teachers, etc).
  8. At the same time you treat them as means, you must also treat them as ends.
  9. ‘End’: the target of an action. The ultimate reason for doing something.


  • Kant is generally criticized as being too rigid in holding that every action is either good or bad in itself, regardless of consequences.
  • For example, lying can sometimes be morally right (“white lies”), and even called for, for example to comfort someone who is dying. Pointless honesty does not seem to be the right choice in all cases. The same can be said of killing (for example, in euthanasia), or of any other action.


  1. Being honest to one’s friends because one feels that this is the right thing to do:
    1. Passes the motivation test.
    2. Can be done by everybody without causing any problems or contradictions.
    3. Treats your friends with respect and “as ends,” rather than as instruments to your own ends (goals).
  2. Lying in order to not harm someone by telling them a hard truth:
    1. Passes the motivation test.
    2. Cannot be done by everybody. If everybody lied, then nobody would believe what they are told, because everybody would assume by default that everyone else lies to them. Therefore, lies would not work any more in such a society. Contradiction!
    3. Treats your friends as ends (kind of), so doesn’t fail the third test.
    4. Still, by failing the second test, the action can be classified as morally bad.
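The two worked examples above can be condensed into a minimal Python sketch. This is purely illustrative: the function name and the three boolean inputs are my own labels for the tests described in this briefing, and the judgment calls themselves must still be made by a person; the code only encodes how the tests combine (all three must pass).

```python
def morally_right(duty_motivated: bool,
                  universalizable: bool,
                  respects_persons_as_ends: bool) -> bool:
    """An action is morally right only if it passes all three tests."""
    return duty_motivated and universalizable and respects_persons_as_ends

# Example 1: being honest to one's friends out of a sense of duty.
honesty = morally_right(duty_motivated=True,
                        universalizable=True,
                        respects_persons_as_ends=True)
print(honesty)  # True: passes all three tests

# Example 2: the white lie. Good motive, but not universalizable:
# if everybody lied, nobody would believe anybody, so lying would
# no longer work. Contradiction!
white_lie = morally_right(duty_motivated=True,
                          universalizable=False,
                          respects_persons_as_ends=True)
print(white_lie)  # False: fails the universalizability test
```

Note how the conjunction mirrors point 1 of the notes above: a good motivation is necessary but not sufficient, since a single failed test makes the action morally wrong.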

How to use in robot ethics

  1. For example, a self-driving car has the choice between killing one person and killing two people. According to Kant, the car cannot make a good choice in this case: every human life has an absolute value, and such values cannot be added up.
  2. A self-driving truck has the choice between destroying a very expensive museum building, including the priceless art stored in it, or killing a homeless criminal sleeping in the street beside the museum. For Kant, human beings have absolute value. To kill the homeless man would mean using him as a mere means to save the museum, and this is immoral.
  3. A self-driving car has the choice of lightly injuring one person or killing twenty priceless pedigree dogs on their way to an exhibition. For Kant, dogs have no moral worth of their own, since they are not rational, autonomous beings. The light injury, on the other hand, cannot be accepted: we would be treating the injured human as a means to save the dogs (immoral).
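The three cases share one Kantian rule: human harm is never traded off, neither against property or animals nor against other human lives. Here is a hedged Python sketch of how that rule could be encoded for a machine. All names and the dictionary format are my own assumptions for illustration, not an established robot-ethics API, and reducing Kant's ethics to counting is itself a drastic simplification.

```python
def kantian_choice(harms_a: dict, harms_b: dict):
    """Return 'a', 'b', or None (no permissible choice).

    Each option is described by counts, e.g.
    {"humans_harmed": 1, "nonhuman_losses": 0}.
    Human harm is lexically prior: an option that harms humans can
    never be chosen over one that does not, and two options that both
    harm humans cannot be ranked by counting victims.
    """
    a_harms_human = harms_a["humans_harmed"] > 0
    b_harms_human = harms_b["humans_harmed"] > 0
    if a_harms_human and b_harms_human:
        return None  # case 1: human lives cannot be added up or compared
    if a_harms_human:
        return "b"   # never use a person as a mere means
    if b_harms_human:
        return "a"
    # Neither option harms a human: nonhuman losses may be compared.
    return "a" if harms_a["nonhuman_losses"] <= harms_b["nonhuman_losses"] else "b"

# Case 1: one death vs. two deaths -> no Kantian verdict is possible.
print(kantian_choice({"humans_harmed": 1, "nonhuman_losses": 0},
                     {"humans_harmed": 2, "nonhuman_losses": 0}))  # None

# Case 3: killing twenty dogs (a) vs. lightly injuring a human (b)
# -> spare the human, whatever the nonhuman cost.
print(kantian_choice({"humans_harmed": 0, "nonhuman_losses": 20},
                     {"humans_harmed": 1, "nonhuman_losses": 0}))  # a
```

The `None` return in case 1 is the interesting design point: a Kantian machine must sometimes refuse to rank its options, which is exactly where a utilitarian machine would simply minimize the body count.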
