Aims and Claims of AI


Aims of AI

What is the central aim of Artificial Intelligence research?

One might think that, ultimately, it is to create something like an artificial human being. This is what science fiction has been suggesting for decades. But if we look a little closer at what this would mean, it quickly becomes clear that AI research cannot possibly aim at creating artificial human beings.

Why not? Well, imagine you tell your AI radio to play some pop music, and you get the reply: “No, Dave, I’m not in the mood for pop. Let’s listen to some Bach instead.”

Clearly, when we make machines, we want them to serve us. We don’t want them:

  1. … to have rights (like we do),
  2. … to forget things (like we do),
  3. … to fall in love (like we do),
  4. … to be in a bad mood,
  5. … to spend time and money on themselves,
  6. … or to need 18 years of learning in order to become useful.

Such an artificial human would essentially be equivalent to a biological one, and then the question would arise: why create such a thing artificially, if we already have a reliable and fun way of doing it biologically?

So, clearly, there must be something else that AI researchers are after.

Psychological and phenomenal concepts of mind (Chalmers)

Philosophers tend to distinguish between two different concepts of the mind: the phenomenal and the psychological mind.

For example, David J. Chalmers explains in “The Conscious Mind”:

  • The phenomenal “… is the concept of mind as conscious experience, and of a mental state as a consciously experienced mental state” (p.11)
  • The psychological “… is the concept of mind as the causal or explanatory basis for behaviour … it plays the right sort of causal role in the production of behaviour, or at least plays an appropriate role in the explanation of behaviour … What matters is the role it plays in a cognitive economy” (p.11)

We talked before about functionalism: the idea that what something is is defined by the role it plays in the context of its operation. A wing is a wing if it plays the role that wings play. It doesn’t matter whether it is made of feathers, steel, or the material of a butterfly’s wing.
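To make the functionalist idea a little more concrete, here is a minimal, purely illustrative sketch (in Python; all names and numbers are invented for this example): anything that plays the lift-producing role counts as a wing, no matter what it is made of.

# An illustrative sketch of functionalism: what counts as a "wing"
# is determined by the causal role an object plays, not by its material.
from typing import Protocol

class Wing(Protocol):
    """Anything with a suitable lift() behaviour plays the wing role."""
    def lift(self, airspeed: float) -> float: ...

class FeatheredWing:
    def lift(self, airspeed: float) -> float:
        return 0.8 * airspeed ** 2      # made-up numbers, illustration only

class SteelWing:
    def lift(self, airspeed: float) -> float:
        return 1.2 * airspeed ** 2

def can_fly(wing: Wing, airspeed: float, weight: float) -> bool:
    # Only the role matters here: the function never asks what the wing is made of.
    return wing.lift(airspeed) > weight

print(can_fly(FeatheredWing(), airspeed=10.0, weight=50.0))  # True
print(can_fly(SteelWing(), airspeed=10.0, weight=50.0))      # True

On the psychological concept of mind, the same point applies to minds: what makes something a mind is the causal role it plays in producing behaviour.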

Chalmers explains further:

“On the phenomenal concept, mind is characterized by the way it feels; on the psychological concept, mind is characterized by what it does.”

Much of the philosophical discussion of mind concerns phenomenal properties. These are not relevant to an AI that merely behaves like humans do.

They would only be relevant to an AI that feels like humans do.

“Strong” and “weak” AI

These terms are used in different ways nowadays, but the most interesting philosophical distinction is this:

  • “Strong AI”: An artificial intelligence system that can think and have a mind, mental states, and consciousness.
  • “Weak AI”: Machines that can demonstrate (simulate?) intelligence, but that do not have a mind, mental states or consciousness.

The distinction weak/strong AI corresponds to the distinction in the Philosophy of Mind between the psychological and the phenomenal aspects of mind:

  • A strong AI system would think.
  • A weak AI system would “think” (note the quotation marks!). In other words, it would only pretend to think.

Normally AI researchers don’t worry about strong AI, since their aim is to create intelligent machines, not minds!

But would we even want phenomenally complete AI systems? It seems that creating such systems would make little practical sense, because, among other things, if they could feel like humans do, they might have moods, get tired or bored, and demand time for rest, or perhaps even a right to education, a right to fall in love with other AI systems, and so on. Probably nobody would want machines like that.


Some criticisms of weak AI

We already have some impressive weak AI systems, that is, systems that behave intelligently: chess- and Go-playing programs, self-driving cars, face recognition systems, and automatic translation software. Still, there are some abilities that a fully “intelligent” AI system should have, and which presently existing systems lack:

  • No machine has yet passed the Turing Test (or even come close).
  • Machines still have very little common sense and do not handle ambiguity well.
  • “Full” intelligence might need a body and an environment to grow in. We will discuss this claim in detail in another post.

Some criticisms of strong AI

Will strong AI, in the sense of a system that feels its own mental states and has true consciousness, ever be possible?

Of course, the most important question is: how would we ever know, even if it existed? It is impossible to say whether other humans experience mental states that are anything like those we experience. Look at someone on the street: how would you know that they have mental states or consciousness similar to yours? (See the discussion of the Three Types of Equivalence in a previous post for more on that question!) We assume that others are conscious in the same way as we are, but this is, strictly speaking, a lazy assumption. There is no way to really know.

There are other reasons to doubt that we will ever be able to create a machine with strong, phenomenally conscious AI:

  • Searle’s Chinese Room argument (see a previous post)
  • What would be a possible functional organisation (hardware infrastructure) for a machine that has “real” mental states? We simply don’t know. We have no idea what would be required to create something that has mental states like ours. For all we know today, it might even be that an immortal soul is required, and that no system lacking a soul will ever be phenomenally conscious. This is not a very fashionable position in cognitive science, but it is impossible to disprove at this time, since we know next to nothing about how consciousness comes about in our brains.
  • It is plausible that at least some of the positive and negative features of human cognition necessarily go together. For instance, the ability of a system to adapt to changes in its environment necessarily requires that the system makes mistakes in its operation and then learns from these mistakes. Or the ability to learn new things might necessarily require the ability to forget others: neural networks have a limited capacity to remember, and their recall is imprecise compared with the ability of a normal, non-AI database to ‘remember’ things (see the sketch after this list).
  • If this were really the case, then we might wait 18 years or so only to grow an unreliable machine with all the drawbacks of human cognition.
  • Perhaps, then, it would be better to concentrate on creating limited-scope weak AI systems that act intelligently without having true mental states (like the systems we are building today), rather than attempting to build strong, phenomenally conscious AI.
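As a purely illustrative aside on the neural-network point above, here is a minimal sketch in plain NumPy (toy data, invented numbers): a tiny network trained on one set of input–output associations and then on a second set will typically get worse on the first, whereas an ordinary key-value store recalls old entries exactly.

import numpy as np

rng = np.random.default_rng(0)

def make_net(n_in=4, n_hidden=16, n_out=4):
    # A tiny one-hidden-layer network with random initial weights (no biases).
    return {"W1": rng.normal(0, 0.5, (n_in, n_hidden)),
            "W2": rng.normal(0, 0.5, (n_hidden, n_out))}

def forward(net, x):
    h = np.tanh(x @ net["W1"])
    return h @ net["W2"], h

def train(net, X, Y, epochs=5000, lr=0.1):
    # Plain gradient descent on squared error.
    for _ in range(epochs):
        out, h = forward(net, X)
        err = out - Y
        dW2 = h.T @ err / len(X)
        dh = (err @ net["W2"].T) * (1 - h ** 2)
        dW1 = X.T @ dh / len(X)
        net["W2"] -= lr * dW2
        net["W1"] -= lr * dW1

def mse(net, X, Y):
    return float(np.mean((forward(net, X)[0] - Y) ** 2))

# Two unrelated "tasks": random input -> output associations (toy data).
XA, YA = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))
XB, YB = rng.normal(size=(8, 4)), rng.normal(size=(8, 4))

net = make_net()
train(net, XA, YA)                        # learn task A
err_before = mse(net, XA, YA)
train(net, XB, YB)                        # now learn only task B
err_after = mse(net, XA, YA)

print(f"task A error before learning B: {err_before:.4f}")
print(f"task A error after  learning B: {err_after:.4f}")   # typically much larger

# A plain dictionary, by contrast, recalls old entries exactly.
db = {tuple(x): tuple(y) for x, y in zip(XA, YA)}
db.update({tuple(x): tuple(y) for x, y in zip(XB, YB)})
assert db[tuple(XA[0])] == tuple(YA[0])   # perfect, unchanged recall of task A

The numbers themselves do not matter; the point is only that the network’s “memory” lives in shared weights, so learning something new can overwrite something old, which a conventional database never does.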