Some interesting definitions of AI are collected at the end of the first chapter of Russell and Norvig’s famous textbook “Artificial Intelligence: A Modern Approach” (AIMA). Let’s take a brief look at them and see whether we can make sense of them, criticise them, or improve them.
Systems that do things that require intelligence in humans
A possible definition of AI would be one that looks at machine behaviour and compares it to the equivalent human behaviour (a behaviourist approach):
- “AI is concerned with building machines that can act and react appropriately, adapting their response to the demands of the situation. Such machines should display behaviour compatible with that considered to require intelligence in humans.” (Finlay and Dix)
- “The act of creating machines that perform functions that require intelligence when performed by people.” (Kurzweil, 1990)
What about these definitions? Can we see any way to criticise them?
“Behaviour compatible with that considered to require intelligence in humans”: ‘compatible’ is a very weak word here, and probably not what the authors meant at all. ‘Compatible’ just means that the machine’s behaviour must not contradict, or be impossible to perform at the same time as, a behaviour that would require intelligence in humans.
So, for example, eating is compatible with playing chess. If playing chess is a behaviour that requires intelligence when performed by humans, then eating would, according to that definition, also count as an AI behaviour, because eating is compatible with playing chess (one can do both at the same time). It seems strange to elevate mere compatibility with an intelligent behaviour into a criterion for intelligent behaviour.
What the authors mean is probably not ‘compatible’ but ‘similar’ or ‘equivalent.’ Kurzweil’s definition, even more simply, requires AI to display the very behaviours that require intelligence when performed by humans, dispensing with the similarity or equivalence requirement altogether.
Still, this doesn’t seem to reflect what we actually do when we attribute intelligence to machines. Consider machines performing the following functions:
- Adding two numbers.
- Changing money when a customer buys a coke.
- Regulating the room temperature by turning an air-conditioner on or off.
Obviously, these functions do require intelligence when performed by humans, but they can be executed by primitive, non-intelligent machines: calculators, coke vending machines, air-conditioner thermostats. All of these can be constructed in purely mechanical ways that don’t even require computer technology. Thermostats and drink vending machines existed before computers were widespread, and various experiments with computers built out of wooden toy blocks, strings, and other mechanical means have confirmed that simple arithmetic can be implemented purely mechanically, by relatively simple hardware.
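The point can be made concrete: two of these functions reduce to a handful of fixed rules, with no judgement or learning involved. A minimal sketch (function names, the target temperature, and the coin denominations are illustrative assumptions, not from the text):

```python
def thermostat(temperature_c, target_c=21.0):
    """Turn the air-conditioner on above the target temperature, off otherwise."""
    return "on" if temperature_c > target_c else "off"

def make_change(change_cents, coins=(200, 100, 50, 20, 10, 5)):
    """Greedily hand out the largest coins that fit the amount owed."""
    result = []
    for coin in coins:
        while change_cents >= coin:
            result.append(coin)
            change_cents -= coin
    return result

print(thermostat(25.0))   # -> on
print(make_change(180))   # -> [100, 50, 20, 10]
```

Each function is a short chain of comparisons, which is exactly why a purely mechanical device can implement the same behaviour.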
Systems that think like humans
Another approach is represented by the definitions by Haugeland and Bellman:
- “The exciting new effort to make computers think … machines with minds, in the full and literal sense.” (Haugeland, 1985)
- “The automation of activities that we associate with human thinking, activities such as decision-making, problem solving, learning…” (Bellman, 1978)
Haugeland and Bellman have very different definitions here. While Haugeland’s applies only to strong AI (“minds in the full and literal sense”), Bellman’s is a behavioural description that could, in principle, be satisfied by a weak AI system.
Systems that act like humans
“The study of how to make computers do things at which, at the moment, people are better.” (Rich and Knight, 1991)
This definition also has its problems. For example, what about digestion? At the moment, people are much better at it than machines. If we created a digesting machine, would this qualify as AI?
Another problem is that this definition is self-defeating. As soon as machines get better than humans at some activity X, doing X stops being an example of AI. This applies even to core AI achievements like playing the game of Go: since people are no longer better than machines at Go, AlphaGo would not qualify as AI and would have only a historical claim to the label. Obviously, something is wrong with this approach.
Systems that think rationally
“The study of mental faculties through the use of computational models” (Charniak and McDermott, 1985)
“The study of the computations that make it possible to perceive, reason, and act.” (Winston, 1992)
Charniak and McDermott’s definition confuses the study of something with the thing itself. AI (as opposed to physics or history) is not a pure science that can be exhaustively described as the “study” of something. It is an engineering discipline that aims not only to study, but to create machines that perform particular functions. What Charniak and McDermott describe would be better called something like computational cognitive science, but not AI.
Winston’s definition, on the other hand, is too narrow. It presupposes that intelligence is computation, which is begging the question. We don’t know whether cognition is nothing but computation (and there are reasons to be sceptical). This definition might arguably not even cover deep neural networks (see later for a detailed explanation), since neural networks are not created by studying or understanding “computations” that underlie mental activities. And again, as with the previous definition, the engineering aspect of AI is entirely neglected and artificial intelligence is reduced to a mere field of (theoretical) study.
Systems that act rationally
“Computational intelligence is the study of the design of intelligent agents.” (Poole et al., 1998)
“AI … is concerned with intelligent behaviour in artefacts.” (Nilsson, 1998)
Of course, both of these definitions are circular: they explain artificial intelligence in terms of ‘intelligent’ agents or ‘intelligent’ behaviour, presupposing the very notion they are supposed to define.
In conclusion, we don’t seem to know much more about what AI is after reading these definitions. But this need not be a deal breaker. It would probably be equally difficult to define “physics” or “theatre” in a general way. This does not mean that physics is not a valid area of study, or that we don’t know a theatre production when we see it. We should just be aware that defining things is hard, and that there can be substantial disagreements between thinkers regarding what artificial intelligence is actually all about.