our previous searches. But if the problem has solutions that are reasonably dense in the sample
space, then random methods should succeed.
From a modern perspective, we know of many algorithms that work well randomly. A sim-
ple illustration is what is called ‘polynomial identity testing’ (PIT), concerning the following
problem. Given a polynomial expression in several variables x, y, z, etc., is that polynomial
equal to 0, no matter what I substitute into the variables? For example, xy + 1 is not such a
polynomial, since substituting x = 1, y = 2 gives the answer ‘3’, whereas the polynomial
xy − yx is such an ‘everywhere 0’ polynomial. Although this might seem a strange
and specialized problem, it turns out that many problems can be computationally restated as
instances of PIT, so that we can solve them by running a PIT algorithm. There are very efficient
randomized algorithms for PIT: in fact, the dumbest algorithm one could imagine works. Take
any random numerical substitutions for x, y, z, . . . and evaluate the polynomial at these values.
If you don’t get 0, the answer is certainly ‘no’, but if you do get 0, say ‘yes’: with high probability
you will be correct, roughly speaking because such polynomials are 0 quite rarely unless they
are always 0.^16
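
To make this random-evaluation test concrete, here is a minimal Python sketch (not from the chapter); the function name, the number of trials, and the choice of modulus are illustrative assumptions rather than anything prescribed in the text.

    import random

    def probably_zero_polynomial(poly, num_vars, trials=20, modulus=10**9 + 7):
        # Randomized polynomial identity test (a Schwartz-Zippel-style sketch).
        # `poly` is any callable taking `num_vars` integer arguments and
        # returning an integer. A single nonzero evaluation proves the
        # polynomial is not identically zero; if every trial gives zero,
        # we answer 'yes', and with high probability that answer is correct.
        for _ in range(trials):
            point = [random.randrange(modulus) for _ in range(num_vars)]
            if poly(*point) % modulus != 0:
                return False   # witness found: certainly not the zero polynomial
        return True            # no witness: almost certainly the zero polynomial

    # The two examples from the text:
    print(probably_zero_polynomial(lambda x, y: x * y + 1, 2))      # False
    print(probably_zero_polynomial(lambda x, y: x * y - y * x, 2))  # True

Each trial that evaluates to 0 only strengthens the evidence, and repeating the test a handful of times drives the chance of a wrong ‘yes’ down rapidly, which is why such a simple procedure suffices.
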
A major theme in modern algorithm design is to use randomness as a resource like this. It
is fair to say that most computational learning algorithms use randomness as a resource, like
those that monitor you on the internet and try to learn your preferences, or those that model
physical phenomena. This use was anticipated by Turing in his writings: he even speculated that
randomness is somehow necessary for intelligence,^17,^18 but seems not to have developed this
theme mathematically.
This lack of mathematical development is perhaps unfortunate, as there was an implicit rec-
ognition of algorithm complexity in Turing’s writings. Such recognition came from cryptog-
raphy, where large search spaces are used to hide information, and from practical computing,
where an answer must be found in real time. Even Turing’s writings on intelligence and the
limitations of machines were full of calculations as to the numbers of possible states in comput-
ers, etc. Had he pushed his ideas further, perhaps Turing could have developed the mathemat-
ical theory of computational complexity (only really investigated since the 1970s)—but this is
complete speculation. It is clear that he had an intuitive understanding of the time and space
complexity of computation.
These complexity issues are very deep. It is a long-standing open question as to whether PIT
can be solved by a deterministic (that is, non-randomized) algorithm in polynomial time. Although
we have no idea how to construct such an algorithm, it is believed that one exists.
This is because of the work of Russell Impagliazzo and Avi Wigderson, who proved a certain
hardness/randomness trade-off: their truly remarkable result says roughly that if certain prob-
lems are as hard as we think they are, then all problems (like PIT) with efficient randomized
polynomial-time algorithms have polynomial-time derandomized versions.^19
This result is remarkable because it says that if (for example) finding a route through all the
cities in a map without repetition is really hard in terms of efficiency, then this other class of
tasks, including PIT, is easy. It is an important result, since to show that some task can be done
easily, we usually show that it can be reduced to another task that can itself be done easily. Many,
many tasks can be reduced to instances of PIT: so, if we can prove that certain problems
are as hard as we think they are, then we would have revolutionary new ways of solving many
problems.
For a long time, determining whether a number is prime had the same status—that is, pri-
mality testing had randomized algorithms that had been known for many years and work very
