The Turing Guide


39 TURING AND RANDOMNESS


we obtain the number 0.2357111317… . Remarkably, this number is known to be normal to
base 10, as shown in 1946 by Arthur Copeland and Paul Erdős,^3 but it is not known whether
concatenating the primes written in base 2 yields a sequence that is normal to base 2. As with
many problems in number theory, it is easy to make conjectures, but hard to prove things!
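As an illustrative sketch, one can generate a prefix of this constant by concatenating the primes and tally the digit frequencies (the sieve bound of 100 000 is an arbitrary choice of ours); normality says each digit's frequency tends to 1/10 in the limit, which a finite prefix can only suggest:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes: all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]

# Digits after the decimal point of the Copeland–Erdős constant.
digits = "".join(str(p) for p in primes_up_to(100_000))
counts = {d: round(digits.count(d) / len(digits), 3) for d in "0123456789"}

print(digits[:12])   # → 235711131719
print(counts)        # each frequency should be loosely near 0.1
```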
Lacking natural examples of absolutely normal numbers, we might ask what an explicit
example would be. In a manuscript thought to have been written around 1938, but never
published in his lifetime,^4 Turing suggested that an absolutely normal number n would be
explicitly constructed if the number were ‘computable’: this means that we can construct a
Turing machine that computes the expansion of n to any desired precision. Any irrational
number met in ‘normal mathematics’ has this property, as all have rapidly converging
approximations; for example, the series e = 1 + 1/1! + 1/2! + 1/3! + … converges rapidly, and
hence its limit e = 2.718281828459… is computable. Similar series are known to converge
rapidly to π, and so π is also computable, as is any other ‘natural’ constant.
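Computability in this sense can be illustrated by a routine that emits e to any requested number of decimal places. This sketch uses exact rational arithmetic and the standard tail bound Σ_{j≥k} 1/j! ≤ 2/k! to decide when the remaining terms can no longer disturb the answer (the function name and the two extra guard digits are our own choices):

```python
from fractions import Fraction

def e_digits(n_digits):
    """Return e truncated to n_digits decimal places, as a string."""
    total = Fraction(0)
    term = Fraction(1)   # current term 1/k!, starting at 1/0! = 1
    k = 0
    # Keep adding while the tail bound 2/k! could still affect the
    # requested digits (two guard digits for safety).
    while 2 * term >= Fraction(1, 10 ** (n_digits + 2)):
        total += term
        k += 1
        term /= k
    scaled = total * 10 ** n_digits
    int_part = scaled.numerator // scaled.denominator
    s = str(int_part)
    return s[0] + "." + s[1:]

print(e_digits(12))   # → 2.718281828459
```

The point is Turing's: a single finite machine produces arbitrarily long correct prefixes of the expansion on demand.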
As we shall see, Turing’s approach anticipates a line of research that can be traced back to
the early 20th century, but was realised only in the early 1960s. Before we look at Turing’s work
in greater detail we briefly describe this body of work. This analysis will enable us to describe
exactly what Turing did, and how he did it.


The events of the 1960s and 1970s


Following unsuccessful historical attempts to capture an account of algorithmic randomness
based on work by Richard von Mises, Abraham Wald, Alonzo Church, and others, three mod-
ern paradigms were developed in the 1960s and 1970s by Andrey Kolmogorov, Ray Solomonoff,
Per Martin-Löf, Leonid Levin, and others:


• A random sequence should have no computably rare properties (the statistician’s
approach).
• A random sequence should have no regularities that would allow for compression of
information. (This has become known as the ‘coder’s approach’ since it encapsulates the
idea of writing computer code to describe a string.)
• A random sequence should not have any predictability (the gambler’s approach).
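The coder’s approach can be made concrete with an off-the-shelf compressor as a crude stand-in for Kolmogorov complexity (a rough illustration of ours, not a definition): a highly regular bit string compresses to far fewer bytes than a pseudorandom string of the same length.

```python
import random
import zlib

regular = ("01" * 5000).encode()   # 10 000 bytes with an obvious pattern
random.seed(0)
noisy = bytes(random.getrandbits(8) for _ in range(10000))  # 10 000 pseudorandom bytes

print(len(zlib.compress(regular)))   # tiny: the pattern has a short description
print(len(zlib.compress(noisy)))     # near 10 000: no exploitable regularity
```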

These clarifications arise from the celebrated works of a number of distinguished authors in the
1960s and 1970s. As we shall see later, Turing’s work anticipated some of these developments
in many ways.
The gambler’s approach, the idea that we should not be able to predict the bits of a random
sequence, goes back to von Mises at the beginning of the 20th century, but it was properly devel-
oped by Claus Peter Schnorr.^5 We idealize a person betting on the bits of an infinite sequence,
thought of as heads and tails. You first bet on the first bit: you can bet either heads or tails,
1 or 0, and after you have bet, the outcome is revealed. For example, suppose that you start
with $3 and bet $1 on heads. If heads comes up you then have $4, and if tails appears you have
$2. Next, according to your current capital and the history so far, you bet on the next bit, and
so on. Unlike in a real casino, you can bet any fraction of your capital that you like.^6
So, taking the gambler’s approach, we want randomness to defeat any algorithmic plan
that a gambler might have. Thus we think of ‘computable betting strategies’: a real number should
be random if no computable betting strategy can make unbounded capital by betting on its bits.
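The betting game above can be sketched as a tiny simulation. The strategy below, betting half the current capital that the next bit repeats the previous one, is an illustrative choice of ours, not Schnorr’s formulation: it grows without bound on a very regular sequence but not on typical coin flips.

```python
import random

def run_strategy(bits, start=3.0):
    """Play the fractional-betting game over a finite bit sequence."""
    capital = start
    last = 0                          # initial guess, before any history
    for b in bits:
        stake = capital / 2           # bet any fraction, unlike a real casino
        capital += stake if b == last else -stake
        last = b                      # predict that bits tend to repeat
    return capital

random.seed(1)
flips = [random.randint(0, 1) for _ in range(30)]

print(run_strategy([1] * 30))   # regular input: capital multiplies by 1.5 per win
print(run_strategy(flips))      # typical coin flips: no such growth
```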
