Case Study: The ethics of self-driving cars


Summary: This is an introductory philosophy article, showing how a philosopher would approach a typical moral case related to autonomous driving.

Intended audience: Philosophy students, engineers with a basic acquaintance with moral theories.

https://moralmachine.mit.edu/

Moral Machine is a ‘platform for gathering a human perspective on moral decisions made by machine intelligence.’ The user is presented with moral dilemmas and has to decide which of several actions seems more (morally) acceptable.

Let’s look at one of the cases presented, and see how a philosopher would approach such a problem.

The problem

A typical scenario goes like this: a self-driving car with a sudden brake failure can either crash into a barrier, killing the people on board (1 male and 1 female athlete, and 2 female executives), or it can swerve and drive through a pedestrian crossing, killing 1 boy, 1 pregnant woman, 1 baby, 1 dog and 1 cat. (You have to love the numerals: “1 baby”.) All these pedestrians are crossing against a red traffic light and shouldn’t be there in the first place.

The features of the case

Obviously, there are multiple issues here:

  1. Most importantly, none of the pedestrians is supposed to be crossing the street at this moment.
  2. The car has a brake failure, so we are supposed to assume that it’s nobody’s fault that this emergency occurs. But this can be questioned, of course.
  3. The people on board are athletes and executives. They are male as well as female.
  4. The prospective victims are children, a pregnant woman, and pets.

The issues in detail

The pedestrians are not supposed to be there

The case clearly states that the traffic light is red for the pedestrians, and that, therefore, none of them should actually be crossing the street at this moment.

This might be relevant to some moral theories.

  • Utilitarianism wouldn’t consider the lawfulness of the crossing as such (it looks only at the total benefit of an action), but it could still argue that killing people who cross against red traffic lights is a kind of preventive measure that will, in due course, deter other pedestrians from doing the same and therefore increase traffic safety in future. Given that we have the choice between killing those in the car, who did nothing wrong, and those on the street, who shouldn’t be there, it seems better to kill those on the street, particularly since the pedestrians had a choice whether or not to cross against the red light, while the car passengers are trapped in this scenario through no fault of their own.
  • Kant would consider two things. First, whether killing the pedestrians would lead to a contradiction if it were made a universal law. Probably not: it would just lead to fewer pedestrians crossing against red lights and improve street safety. Second, he would ask whether the pedestrians are being treated merely as means rather than as ends. In a sense, of course, they are, especially if we see killing them as a preventive measure (as utilitarians would). But Kant also insists that everyone be treated as a rational being, and in this sense everyone must accept the consequences of his or her actions. To treat the pedestrians with respect for their rationality means to hold them responsible for crossing against the red light. If this responsibility means that they will perish, then so be it.
  • Of course, this creates another problem: driving over those pedestrians is now indistinguishable from a punishment. We could say that the car is punishing the pedestrians by killing them, and that society, by endorsing the car’s decision, is endorsing such a punishment. That is, we would be endorsing the death penalty for jaywalking. Does this seem morally right?
  • Kant’s argument leads to a form of retributivism: the idea that wrongdoers deserve to have their wrongdoing visited back upon them. Whenever I do something bad, I do it consciously; by choosing to do it, I endorse it as the kind of thing that people may do (because otherwise why would I do it?). Therefore, I must accept that others are also free to do the same bad things to me.
  • Now this seems to work, more or less, for murder or theft. A murderer cannot well complain if someone shoots him dead, since he himself was about to kill someone else in the first place. A thief cannot well complain if someone steals his booty from him. It works less well with rape, for instance. And it doesn’t seem to work at all with crossing against a red light. What would the right retribution be? To force others to also cross against red lights? Or to accept a pre-defined punishment for doing so? In any case, killing someone as retribution for crossing against a red light seems excessive.
  • Social contract theories would generally look at the agreements we have made within our societies. Assuming we are all free, rational agents, would we agree to a regulation that allows a car to kill us if we cross the street illegally? It’s hard to say. Remember that the alternative was to kill the people inside the car. Would a rational agent want to buy a car that will kill him in the case of a mechanical failure combined with people jaywalking in front of it? Neither alternative sounds appealing, but we do have to choose; there is no escaping the choice. So in the end, it seems the lesser evil to kill the pedestrians. After all, as mentioned before, the pedestrians had a choice not to cross against the red light (and in a society where cars are regularly driven by software, they would be aware of the danger of being killed if they did). Crossing against a red light would then become similar to crossing subway tracks today: nobody in their right mind does it, except perhaps in a life-and-death emergency, since we are all aware of the danger. Still, we don’t think there is anything morally wrong with subway tracks being dangerous. They are, we are warned, and that’s it. Traffic lights could come to be perceived in a similar way.
  • In such a society, by the way, we could make crossing against a red light technically harder. For instance, we could post more danger notices, a siren could go off whenever someone steps onto the street while the light is red, or, more drastically, a metal barrier could physically prevent people from crossing when they shouldn’t. Then we would have no good reason to object to the car killing the pedestrians, in the same way that we don’t object when someone opens up the back of their fridge, ignores all the warning stickers and safety screws, tampers with the electrical installation inside, and gets electrocuted as a result. We would say: he should have known better.

What about the children?

This kind of analysis applies only to rational, free adults. We cannot plausibly apply it to children, severely mentally disabled people, or pets. Of course, one could say that children (and other groups that are cognitively unable to perceive the danger) have no business taking part in street traffic unsupervised anyway. They should never have been confronted with a red traffic light and the free decision whether or not to obey it.

But this is unrealistic. A social environment must be reasonably safe and forgiving of errors, rather than being harsh and dangerous. If a child or pet ventures out of the supervision of an adult, we should have a reasonable expectation that this will not as a rule endanger its life. Bad things can happen, but they shouldn’t be the norm.

The car has a brake failure

What is the moral significance of the car having a brake failure, rather than any other malfunction?

First, trivially, a brake failure ensures that the car keeps moving forward, so that the pedestrians are at risk. This serves the purpose of the example.

Second, a brake failure is a rare malfunction that is usually attributable to bad servicing of the vehicle. A regularly inspected car should not experience brake failures.

This means that a brake failure, unlike other possible causes of this scenario (for instance, sudden driver death), puts some moral blame on the vehicle’s owner. Had the owner had the car inspected and serviced regularly, the accident would probably have been avoided. Assuming (as would normally be the case) that the owner is also the driver of the vehicle, the driver is now to blame for the accident, at least to some extent. How does this change things?

  • Utilitarianism would probably not see much difference. From a utilitarian perspective, we should demand that the government perform better and more frequent checks on cars, so that the likelihood of such accidents decreases. But otherwise, blame as such does not matter to the utilitarian.
  • Kant would see this as more important. As discussed before, when a criminal commits a crime (or a driver neglects his car until it becomes dangerous), justice demands some form of retribution. The driver deserves to be punished, to some extent, for not keeping his car in order, while the pedestrians don’t deserve to die for that same reason. But again, although the driver deserves to be punished in some way, for example by paying a fine, he probably does not deserve to be killed.

The people on board are athletes and executives

I am not entirely sure what the significance of this is supposed to be. I’m not even sure that I want to know.

Is the idea that an athlete is more valuable than, say, a philosopher? Or, God forbid, someone who’s bad at sports? Or someone with a disability? Is an executive more valuable than a social worker? Or a housewife? Or is it the other way around entirely? Does the case imply that the car is carrying only executives and athletes, so that crashing it wouldn’t cause much harm? In any case, that part of the specification is something of a mystery.

Of course, different moral theories do have something to say about that:

  • Utilitarianism wouldn’t shy away from weighing people in terms of their actual or future contributions to the welfare of mankind. A technician or a doctor might be considered more valuable than an executive or an athlete, and these in turn more valuable than an imprisoned criminal. Although the details of such an evaluation can differ, in principle utilitarianism holds that we should take into account who dies in the accident.
  • Kant, on the other hand, views all persons as equal in value. The specific value of humans consists in their being rational, autonomous beings who are, at the same time, the “creators of the moral law as well as its subjects.” That is, humans create ethics using their rationality and then freely decide to obey the very rules they have just made. This is the unique situation of human beings, and it is what bestows dignity on them. For Kant, therefore, it should make no difference who lives or dies. The number of people dying would also be irrelevant to him, since lives cannot be added up numerically: two deaths are not twice as bad as one. Each human’s worth is unlimited, and so the values of the dead cannot be summed.
  • For Aristotle, people do have different values. Each person strives toward perfecting himself or herself, but different people are at different points along that path to perfection. The criminal who has no knowledge of what really counts in life is at a low stage, while the philosopher has reached a higher one. Both can progress further (perhaps indefinitely), but they are certainly not equals in what they have already attained. So killing a philosopher would certainly be worse than killing a criminal (in terms of human value).

Executives and hermits

Another interesting question is: why “executives” and “athletes,” rather than mathematicians and composers, or calligraphers and hermits? The scenario seems to reflect the cultural prejudices of its creators: students and postgraduates at an elite US university, people who are going to be technology startup founders, and who, in agreement with their social environment, value two things: physical fitness and commercial success.

There is nothing inherently wrong with such a choice of role models. But there is a danger that machines built to moral specifications like these will incorporate these value judgements as part of their code. And when Ford sells a car that incorporates these value judgements to a buyer in China or Qatar, the preference for executives and athletes (over poets and Imams) is exported along with that piece of technology. Not explicitly and openly, but as a hidden value judgement, a priority statement in a life-or-death decision that might never actually be executed. The preference will be there all the same, and this is the danger of this kind of hidden moral imperialism.
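
To make the worry concrete, here is a minimal, purely hypothetical sketch (in Python) of how such a preference could end up buried in a car’s decision logic. None of the names, weights, or functions below come from the Moral Machine project or any real vehicle; they are invented solely for illustration.

```python
# Hypothetical illustration only: an invented, naively hard-coded "value table".
# Nothing here reflects any real vehicle's software or the Moral Machine project.
CASUALTY_WEIGHTS = {
    "executive": 1.2,   # an implicit cultural preference, shipped silently with the code
    "athlete":   1.1,
    "pedestrian": 1.0,
    "poet":      0.9,   # whoever sets these numbers decides whom the car protects
}

def expected_harm(group):
    """Sum the hard-coded weights of a potential casualty group."""
    return sum(CASUALTY_WEIGHTS.get(person, 1.0) for person in group)

def choose_path(on_board, on_crossing):
    """Pick the 'lesser harm' option according to the baked-in weights."""
    return "swerve" if expected_harm(on_crossing) < expected_harm(on_board) else "stay"
```

A buyer in another country would never see these numbers; they would simply be part of the software, which is exactly what makes the value judgement a hidden one.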


Should we then build cars that favour Imams over executives when we build cars for Islamic audiences? Should we favour athletes over disabled people, blacks over whites, women over men, or the opposite in each case? These are important questions that are not currently being addressed.

The prospective victims are children, pregnant women, and pets

Similarly, the choice of victims tells a story. Children and pets are not fully responsible for their actions, which complicates the jaywalking question (see above). Pregnant women, like children, usually symbolise people in need of special protection and care (this is why they are assigned special seats on buses). Of course, the elderly also need such protection, but the case leaves them out. If the car has to choose between killing a pregnant woman and an old man, which should it be?

We tend to favour the future, and we tend to protect the chances of children and the unborn. We would probably say that the old man has “had his life” already, and that we should therefore favour the young and the unborn. But again, if this is going to be a design feature of a self-driving car, it needs to be a conscious choice, not something left to chance or to the unreflective regurgitation of cultural stereotypes by some nameless US programmer.

Human drivers vs autonomous cars

One aspect is easily overlooked in the discussion of self-driving cars: we should compare the car’s performance not with some Platonic ideal of a car that always acts morally right, but with the expected performance of a good, rational human driver. If the car performs consistently better than the human driver, that is a strong argument in its favour. It doesn’t need to be perfect, just better.

Now what would a human driver do in the case discussed here?

Is it at all probable that he would even hesitate? Would any human driver consider killing himself in order to spare a handful of pedestrians who are illegally crossing against a red light, and who shouldn’t have been in his way anyway? I don’t think so. The driver’s survival instinct will clearly dictate that the only choice is to avoid the barrier and kill the pedestrians, and that’s it. An autonomous car that acted in this way would therefore be acting no differently from a human driver.

Let’s also not overlook the fact that this scenario is going to be very rare. Like all such made-up cases, a whole chain of conditions must be fulfilled for it to play out as designed:

  • A traffic light must be present
  • Pedestrians must be crossing against it illegally
  • At the same moment, the car’s brakes must fail (brake failure is not even mentioned in the top 25 causes of car accidents, which include things like fog, tire blowouts, and street racing)
  • And the road must be blocked by an obstacle that would kill the car’s passengers if hit.

A combination of all these factors is rare indeed. And if an accident like this does happen once, does that mean we must design self-driving cars with such freak scenarios in mind?
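
To see just how small the joint probability could be, here is a back-of-the-envelope calculation with entirely made-up, illustrative probabilities; the point is only that independent low-probability conditions multiply.

```python
# Illustrative only: every probability below is invented for the sake of the example.
p_red_light_ahead  = 0.05    # a pedestrian crossing with a red light happens to be ahead
p_illegal_crossing = 0.01    # pedestrians are crossing against it at exactly that moment
p_brake_failure    = 0.0001  # the brakes fail at exactly that moment
p_lethal_obstacle  = 0.01    # the only alternative path ends in a lethal obstacle

# If the conditions were independent, the chance of all of them coinciding on a
# given trip would be their product.
p_scenario = (p_red_light_ahead * p_illegal_crossing
              * p_brake_failure * p_lethal_obstacle)
print(f"joint probability per trip: {p_scenario:.0e}")  # roughly 5e-10
```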

So perhaps we can say that, although there is a theoretical risk of a car (self-driving or not) killing some pedestrians in a very low-probability scenario like this, the likelihood is so small that it does not warrant further consideration. In the same way, we don’t design regular cars to protect their passengers against driving off a bridge, driving into deep water, or being caught in the blast radius of a bomb. Perhaps the whole discussion is without merit, and we should concentrate on solving more realistic problems, of which there are enough to keep us busy for a long time.

