The Turing Guide

432 | 39 TURING AND RANDOMNESS


Turing’s idea was first to extend the law of large numbers to ‘blocks’ of digits. Intuitively one
can see that not only single digits, but also fixed blocks of digits, should occur with the desired
frequency in a random sequence, for otherwise we could bet on the knowledge that they do not.
For example, if we knew that no blocks of the form 1011 appear in the sequence, but that blocks
of the form 101 do appear, then we would wait until we saw 101 and then bet that the next
digit would be 0, since 1011 never occurs.
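The frequency requirement is easy to check empirically. The following sketch (an illustration of mine, not part of Turing's manuscript) counts overlapping occurrences of a fixed block in an initial segment of a binary sequence; the binary Champernowne sequence is used as the test case because it is known to be normal in base 2, so every block of length k should occur with limiting frequency 2^-k:

```python
# Sketch: empirical block frequencies in a binary sequence.
# In a sequence normal in base 2, every block of length k occurs with
# limiting frequency 2**-k among overlapping length-k windows.

def block_frequency(seq: str, block: str) -> float:
    """Fraction of overlapping length-k windows of seq equal to block."""
    k = len(block)
    windows = len(seq) - k + 1
    hits = sum(1 for i in range(windows) if seq[i:i + k] == block)
    return hits / windows

# The binary Champernowne sequence 1 10 11 100 101 ... is normal in
# base 2, so 4-blocks should approach frequency 1/16 = 0.0625.
champernowne = "".join(bin(i)[2:] for i in range(1, 5000))
print(block_frequency(champernowne, "1011"))
print(block_frequency(champernowne, "0000"))
```

Convergence for Champernowne-type sequences is slow, so the printed frequencies only roughly approximate 1/16 at this length.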
The reason for using blocks of digits is that translating between bases results in correlations
between blocks of digits in one base and those in another. Realising this, Turing made the
following the key to his construction: he regarded blocks as generating ‘tests’ with ‘small
measure’, so that if a sequence avoids all such tests, it should be an absolutely normal number.
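To see what ‘small measure’ means here, consider the set of infinite binary sequences whose first n bits avoid some fixed block: its Lebesgue measure can be computed exactly, and it shrinks towards zero as n grows. The sketch below (my own illustration, not Turing's construction) computes it with a standard string-matching automaton:

```python
def avoid_fraction(pattern: str, n: int) -> float:
    """Measure of the set of binary sequences whose first n bits contain
    no occurrence of `pattern`, computed exactly by dynamic programming
    over a KMP-style automaton. The complementary 'test' (sequences that
    do contain the block) has measure 1 - avoid_fraction(pattern, n)."""
    k = len(pattern)
    fail = [0] * k                       # KMP failure function
    for i in range(1, k):
        j = fail[i - 1]
        while j and pattern[i] != pattern[j]:
            j = fail[j - 1]
        fail[i] = j + 1 if pattern[i] == pattern[j] else j

    def step(state: int, c: str) -> int:
        """Automaton transition: longest pattern prefix after reading c."""
        j = state
        while j and pattern[j] != c:
            j = fail[j - 1]
        return j + 1 if pattern[j] == c else j

    counts = [0] * k                     # strings ending in each state
    counts[0] = 1
    for _ in range(n):
        nxt = [0] * k
        for s, cnt in enumerate(counts):
            if cnt:
                for c in "01":
                    t = step(s, c)
                    if t < k:            # t == k would complete the pattern
                        nxt[t] += cnt
        counts = nxt
    return sum(counts) / 2 ** n

print(avoid_fraction("11", 3))     # 5 of the 8 length-3 strings avoid '11'
print(avoid_fraction("1011", 100))
```

The measure decays geometrically in n, which is the sense in which a block generates a test of small measure.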
Notice how this echoes the 1936 paper where he introduced Turing machines.^12 There it was
not just that Turing solved the technical question of the Entscheidungsproblem, which had argu-
ably already been solved by Church, Emil Post, Stephen Kleene, and others,^13 but he introduced
the fundamental model of computation along with an argument supporting it. In his unpub-
lished manuscript, Turing gave an explicit construction of an absolutely normal number: this
was a fine technical achievement. But, more importantly, he did so by massively generalizing
the problem. His ideas were years in advance of their time.
Here is what Jack Lutz said of this in 2012 in a lecture at a conference in Cambridge:


Placing computability constraints on a nonconstructive theory like Lebesgue measure seems a
priori to weaken the theory, but it may strengthen the theory for some purposes. This vision is
crucial for present-day investigations of individual random sequences, dimensions of individual
sequences, measure and category in complexity classes, etc.


What is fascinating here is the clarity of Turing’s intuition: to construct absolutely normal
numbers, take the classical proof that almost all numbers are absolutely normal, and re-do
this as a computable version. Turing’s tests were sensitive enough to exclude all numbers that
fail to be absolutely normal, but insensitive enough to allow suitable computable sequences to pass them.
It is interesting to speculate about why Turing did not develop the whole apparatus of algo-
rithmic randomness, since he seemed to have many of the ideas needed for this development.
Whilst he had the notion of a pseudo-random number generator, he thought that randomness
was a physical phenomenon. Perhaps he did not see the need for a definition of randomness for
an individual sequence. To me, the following quotation from 1950 suggests that we can increase
the power of a Turing machine by adding randomness, and hence that he recognized the
non-computable nature of randomness:^14


An interesting variant on the idea of a digital computer is a ‘digital computer with a random
element.’ These have instructions involving the throwing of a die or some equivalent electronic
process; one such instruction might for instance be, ‘Throw the die and put the resulting number
into store 1000.’ Sometimes such a machine is described as having free will (though I would not
use this phrase myself ).
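A machine of the kind Turing describes is easy to caricature in modern code. In the sketch below, the instruction name THROWDIE and the numbered store layout are my own inventions for illustration; only the die-throwing instruction itself comes from Turing's description:

```python
import random

# A toy machine in the spirit of Turing's 'digital computer with a
# random element'. The instruction name and store layout are invented.

class RandomElementMachine:
    def __init__(self) -> None:
        self.store: dict[int, int] = {}   # numbered storage locations

    def execute(self, instruction: str) -> None:
        op, arg = instruction.split()
        if op == "THROWDIE":
            # 'Throw the die and put the resulting number into store <arg>'
            self.store[int(arg)] = random.randint(1, 6)
        else:
            raise ValueError(f"unknown instruction: {op}")

machine = RandomElementMachine()
machine.execute("THROWDIE 1000")   # Turing's example instruction
print(machine.store[1000])         # a number from 1 to 6
```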


Interestingly, in the sentences immediately following this passage, he recognized the difficulty
of detecting randomness by observation:


It is not normally possible to determine from observing a machine whether it has a random
element, for a similar effect can be produced by such devices as making choices depend on the
digits of the decimal for π.
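The device Turing mentions, making choices depend on the digits of the decimal for π, can be sketched directly. In the sketch below, the first fifty decimal digits are hardcoded rather than computed, and the parity rule for turning a digit into a binary choice is my own simplification:

```python
# A deterministic machine whose 'choices' depend on the digits of pi.
# As Turing observes, an outside observer may be unable to distinguish
# it from a machine with a genuine random element.
PI_DIGITS = "31415926535897932384626433832795028841971693993751"

def pi_choice(n: int) -> int:
    """Binary 'choice' number n: the parity of the nth decimal digit."""
    return int(PI_DIGITS[n]) % 2

choices = [pi_choice(n) for n in range(10)]
print(choices)
```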
