if the only light source in the environment is itself coloured, this is not a normal condition. In
these cases an object’s looking yellow would not suffice for it to be yellow. Likewise, response-dependence
theorists may claim that an action is morally right if (and only if) normal observers
in normal conditions feel obliged to perform the action. A sociopath is not a normal observer,
and a world in which neither suffering nor sympathy exists is not a normal condition. Even in
the case of the concept of beauty, theorists attempt to specify normal or ‘ideal’ observers and
conditions. They may say, for example, that a normal observer is someone who has an evolved,
brain-based ‘sense’ of beauty and normal conditions are those that facilitate (or do not obstruct)
this sense. Even beauty is not in the eye of just any beholder.
Turing was aware that humans are tempted to ‘imagine intelligence’ in manifestly unintelligent
machines. In the 1948 report in which he described playing chess against a paper machine,
he said that playing against such a machine ‘gives a definite feeling that one is pitting one’s wits
against something alive’.^19 Turing’s 1950 and 1952 imitation games specify normal observers
and conditions for judgements of intelligence in machines. Together these exclude cases where
a machine looks intelligent but (we want to say) is not intelligent. According to Turing, a normal
observer (that is, an imitation-game interrogator) is ‘average’ and ‘should not be expert about
machines’.^20 Normal conditions are the interview rules set out in the 1950 game: any question
is allowed, including comments from the interrogator such as ‘I put it to you that you are only
pretending to be a man’, and the machine is required to answer questions on ‘almost any one of
the fields of human endeavour that we wish to include’.^21
In sum, a machine is intelligent (or thinks) if, in the conditions of the 1950 computer-imitates-human
game, it appears intelligent to an average interrogator.^22 An AI researcher who
knows the weaknesses of artificial systems—the sort of judge often found in the annual Loebner
Prize Contest in Artificial Intelligence—is not a normal observer. Also, asking only formulaic
questions about the weather—which might enable the sort of simple chatbot that does well in
Loebner Contests to appear intelligent—is not a normal condition. In the Turing test of
intelligence in machines, not just anything goes.
Is the concept of free will an ‘emotional’ concept?
To build a thinking machine, Turing proposed beginning with a simple ‘unorganised’ machine
and teaching it as we do a human child; the machine is to go beyond its programming and
make its own ‘choices’ and ‘decisions’ (see Chapter 30). Critics of AI, however, claim that
everything a machine does is the result of programming. For example, Geoffrey Jefferson, another
participant in Turing’s 1952 radio discussion, whose views Turing targeted in ‘Computing
machinery and intelligence’, said:
It can be urged, and it is cogent argument against the machine, that it can answer only problems
given to it, and, furthermore, that the method it employs is one prearranged by its operator . . .
It is not enough, therefore, to build a machine that could use words (if that were possible), it
would have to be able to create concepts and to find for itself suitable words in which to express
additions to knowledge that it brought about. Otherwise it would be no more than a cleverer
parrot, an improvement on the typewriting monkeys which would accidentally in the course of
centuries write Hamlet.