The Turing Guide

NOTES TO PAGES 298–306 | 513



  1. Eric Brown, at a NEOSA Tech Week event in Cleveland (18 April 2012) (http://www.youtube.com/
    watch?v=bfLdgDYjC6A).

  2. K. Jennings, ‘Watson, Jeopardy and me, the obsolete know-it-all’, TEDxSeattleU, filmed February
    2013 (http://www.ted.com/talks/ken_jennings_watson_jeopardy_and_me_the_obsolete_know_it_
    all); John Searle is quoted in Weber (Note 36); A. Levinovitz, ‘The mystery of Go, the ancient game
    that computers still can’t win’, Wired (12 May 2014).

  3. D. Michie (Note 27), p. 20; G. Jefferson (Note 35), p. 110 (Turing quotes Jefferson’s words in Turing
    (1950), p. 451).

  4. Turing (1950), p. 452; Turing (1947), p. 394.

  5. Turing (1951), p. 485; Turing (1948), p. 410; Turing (1950), p. 450.

  6. For Turing’s attitude to this reaction, see D. Proudfoot, ‘Mocking AI panic’, IEEE Spectrum, 52(7)
    (2015), 46–7.

  7. H. L. S. (Viscount) Samuel, Essay in Physics, Blackwell (1951), pp. 133–4; H. Cohen, ‘The status of
    brain in the concept of mind’, Philosophy, 27(102) (1952), 195–210, p. 206; Turing (1950), pp. 455, 459;
    Turing et al. (1952), p. 500.

  8. Turing (1952), p. 459.


CHAPTER 28 TURING’S CONCEPT OF INTELLIGENCE (PROUDFOOT)



  1. The central argument in this paper is developed in D. Proudfoot, ‘Rethinking Turing’s test’, Journal of
    Philosophy, 110 (2013), 391–411; see also D. Proudfoot, ‘Anthropomorphism and AI: Turing’s much
    misunderstood imitation game’, Artificial Intelligence, 175 (2011), 950–7.

  2. It is the capacity for or tendency to ‘thinking’ behaviour that is the thinking, according to the
    behaviourist. All references to behaviourism in the text should be understood in this way.

  3. See W. Mays and D. G. Prinz, ‘A relay machine for the demonstration of symbolic logic’, Nature,
    165(4188) (1950), 197–8.

  4. See W. Mays, ‘Can machines think?’, Philosophy, 27(101) (1952), 148–62, pp. 151, 160, 151. Like Mays,
    many read Turing as claiming that if the machine’s behaviour isn’t indistinguishable from that of the
    human contestant, it doesn’t think. But Turing made it clear that an intelligent machine might do
    badly in his game.

  5. P. Ziff, ‘About behaviourism’, Analysis, 18(6) (1958), 132–6, p. 132.

  6. Mays (Note 4), pp. 149, 162, and 150.

  7. Mays (Note 4), p. 158.

  8. Turing (1948), p. 431.

  9. Turing et al. (1952), p. 495; Turing (1950), p. 441; Copeland (in The Essential Turing, p. 436) describes
    Turing’s protocol for scoring the game.

  10. Turing et al. (1952), pp. 495, 503.

  11. The 1952 game is played by both machine and human contestants, but only one contestant is
    interviewed at a time.

  12. Turing et al. (1952), p. 496.

  13. Except insofar as the machine must be able to produce the appropriate behaviour in real time: see
    Proudfoot 2013 (Note 1), pp. 400–1.

  14. Turing (1948), p. 411; Turing (1948), p. 431; and Turing et al. (1952), p. 500.

  15. Turing (1948), p. 431.

  16. A ‘paper machine’ is a human being ‘provided with paper, pencil, and rubber, and subject to strict
    discipline’, carrying out a set of rules (Turing (1948), p. 416).

  17. Why did Turing design a 3-player game? See Proudfoot 2013 (Note 1), pp. 409–10.

  18. R. A. Brooks, ‘Intelligence without reason’, in L. Steels and R. A. Brooks (eds), The Artificial Life Route
    to Artificial Intelligence, Lawrence Erlbaum (1995), 25–81, p. 57.

  19. Turing (1948), p. 412.

  20. Turing (1950), p. 449; Turing et al. (1952), p. 495.

  21. Turing et al. (1952), p. 495; Turing (1950), p. 442.
