“Give us some rules to implement!”


In the summer of 2016, I was invited to a conference of philosophers and AI researchers in Germany on the ethics of autonomous machines.

It was an intriguing conference and a great experience, and I learned a lot about both AI technologies and machine ethics. But even during the conference it became apparent that there was a huge, unbridged gap between the two parties: let’s call them the philosophers and the engineers.

Both groups consisted of extremely intelligent, capable and accomplished people, and yet, despite their willingness to communicate with each other, they could not find a basis on which to do so successfully. The conference soon split into its two camps, the philosophers and the engineers, and very little real communication took place between them.

What were the reasons? Thinking about it later, long after the meetings were over, I came to see several.

First, philosophers are in the business of thinking, of constructing ever more elaborate thought experiments, while engineers are interested in solving a particular engineering problem, even imperfectly, and then moving on. For a philosopher, “moving on” is seldom an attractive option. We tend to revisit the same problems again and again, often for thousands of years, until we can declare ourselves satisfied that a problem is finally solved (or, more commonly, re-framed in more modern terms, so that the discussion can be rebooted from a different angle).

Second, the two camps share no common vocabulary. Philosophers use terms like consequentialism and virtue ethics, while engineers speak of reinforcement learning and convolutional networks, so neither party understands what the other is talking about. This makes it easy to dismiss the other party’s work as irrelevant to one’s own.


I still vividly remember the exasperated cry of an AI expert: “Can you philosophers not just give us some ethics rules that we can implement?” It was a perfectly sensible and honest request, but of course it sounds almost silly to a philosopher. After all, the whole point of philosophising is to question the rules, not to cast them in stone! But the point of engineering is to create a product, and so the two approaches are diametrically opposed.

What can be done, then, to bridge this gap? The worst case is what is happening right now: engineers work on their artificial agents without bothering to understand philosophy, and they create ad hoc implementations of moral principles whenever they feel they need them. These implementations are sometimes philosophically naive and uninformed, but as long as philosophers do not do a better job of educating engineers about ethics, they are the best implementations we have.

So what we need is some kind of exchange. Philosophers and engineers really need to learn to approach each other. This means not only being willing to listen to each other (which they already are), but also being willing to understand what the other side needs: the philosophers need to understand the engineering mind-set, and the needs of an engineer who has to deliver a self-driving car and who cannot wait another thousand years for perfect solutions to the trolley problems. The philosophers need, in this case, to be able to stop questioning and deliberating, and simply to provide the engineers with something: something better than what the engineers would have come up with by themselves. Of course, this also means that philosophers need to understand a little of the engineering side of artefact morality. Not much, but enough to grasp the difference between a PROLOG-based action planner and a reinforcement-trained neural network.


Engineers, on the other hand, need to understand enough moral theory to create educated, intelligent, sophisticated implementations that avoid the most obvious fallacies and pitfalls. They should understand that a deontological approach to ethics is fundamentally different from a consequentialist one, and which of the two is more likely to lead to a practical implementation in their particular domain.
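
To give a feel for what that difference looks like from the engineering side, here is a deliberately tiny, hypothetical sketch in Python. The actions, the “solid line” rule and the harm numbers are all invented for illustration; nothing here comes from a real driving system. The point is only that a deontological controller filters out rule-violating actions before optimising, while a consequentialist controller optimises over expected outcomes alone, and the two can disagree on the very same candidate set.

```python
# Toy sketch only: invented actions, rules and numbers, not a real system.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    crosses_solid_line: bool   # the kind of hard rule a deontological filter checks
    expected_harm: float       # the kind of outcome a consequentialist weighs

CANDIDATES = [
    Action("brake_hard",      crosses_solid_line=False, expected_harm=0.4),
    Action("swerve_left",     crosses_solid_line=True,  expected_harm=0.1),
    Action("continue_slowly", crosses_solid_line=False, expected_harm=0.7),
]

def deontological_choice(actions):
    """Rule-based: discard any action that violates a hard constraint,
    whatever its consequences, then pick among the permitted ones."""
    permitted = [a for a in actions if not a.crosses_solid_line]
    return min(permitted, key=lambda a: a.expected_harm) if permitted else None

def consequentialist_choice(actions):
    """Outcome-based: ignore the rules and pick whatever minimises expected harm."""
    return min(actions, key=lambda a: a.expected_harm)

if __name__ == "__main__":
    print("deontological:   ", deontological_choice(CANDIDATES).name)    # brake_hard
    print("consequentialist:", consequentialist_choice(CANDIDATES).name)  # swerve_left
```

Even in this toy form, the choice of moral framework changes what the artefact does, which is exactly why engineers cannot treat the underlying theory as an interchangeable detail.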