CAMPBELL-KELLY | 215
he was the sole member of his section, but he was joined by James Wilkinson in May 1946.
Although Turing left the NPL in September 1947, Wilkinson proved an outstanding successor.
He made fundamental contributions to numerical methods, was elected a Fellow of the Royal
Society in 1969, and in the following year received the Turing Award of the Association for
Computing Machinery, computing’s highest honour. Others who joined the ACE section later
included Mike Woodger and Brian Munday.
The ACE design
Almost all of the early computer projects in the United States and the United Kingdom were
based closely on the EDVAC report of June 1945 and the subsequent reports produced by von
Neumann and his colleague Herman Goldstine at the IAS. Although Turing took many ideas
from EDVAC, ACE was by far the most original design of its era. Turing submitted his ‘Proposal
for the development in the Mathematics Division for an Automatic Computing Engine (ACE)’
in March 1946.^3 It was remarkably detailed, and even included a cost estimate of £11,200 (which,
inevitably, was optimistic by a factor of at least 10). The proposal cited the EDVAC report and
used its terminology, memory structure, adding technique, and logic notations.
To appreciate ACE’s novelty it is necessary to know a little of how EDVAC operated and its
memory technology. The EDVAC report specified that a computer would consist of five
functional parts: the control, the arithmetic unit, the memory, and the input and output devices. A
key characteristic of EDVAC was that both programs and numbers were stored in the memory.
The memory consisted of a sequence of storage locations numbered from zero upwards, each
of which could store either an instruction or a number. A program was placed in a sequence
of consecutive memory locations. To obey the program, instructions would be fed from the
memory into the control unit and then executed in ascending sequence (unless interrupted by
a ‘branch’ instruction). Turing recognized that executing instructions sequentially in this way
was inherently inefficient.
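The sequential scheme can be sketched as a short program. This is an illustrative fetch–execute loop in the EDVAC style, not EDVAC's actual order code: the opcode names, the single accumulator, and the dictionary memory are all assumptions made for clarity.

```python
# Minimal sketch of EDVAC-style sequential execution (hypothetical
# instruction set). Programs and numbers share one memory; instructions
# are fetched in ascending order unless a branch redirects control.
def run(memory):
    acc = 0          # single accumulator
    pc = 0           # program counter: address of the next instruction
    while True:
        op, operand = memory[pc]      # fetch from the shared memory
        pc += 1                       # default: next location in sequence
        if op == "LOAD":
            acc = memory[operand]
        elif op == "ADD":
            acc += memory[operand]
        elif op == "STORE":
            memory[operand] = acc
        elif op == "BRANCH":          # a 'branch' breaks the sequence
            pc = operand
        elif op == "HALT":
            return acc

# Program in locations 0..3, data in locations 4..5.
mem = {0: ("LOAD", 4), 1: ("ADD", 5), 2: ("STORE", 5), 3: ("HALT", 0),
       4: 2, 5: 3}
print(run(mem))   # 5
```

On a delay-line machine, the cost of this arrangement is that each fetch must wait for the next instruction to come round, which is the inefficiency Turing identified.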
EDVAC's memory was to be built from mercury delay lines. A typical delay line consisted of a 5-foot mercury-filled steel tube with acoustic transducers
at each end. In order to store data, electronic pulses (‘bits’) were converted to acoustic energy at
one end of the tube; these travelled at the speed of sound down the mercury column and were
reconverted to electrical pulses at the other end. These pulses would then be re-injected
into the input end of the delay line. In this way, a stream of bits would be trapped inside the
delay line indefinitely. Sound travelled very much more slowly than electricity, so a sonic pulse
would take about 1 millisecond (ms) to travel down the tube. The delay line could store
approximately 1000 bits (or, more usefully, 1024 bits). The complete memory would consist of a bank
of several mercury delay lines. Turing envisaged between 50 and 500 delay lines, giving a total
memory capacity of between 1500 and 15,000 instructions or numbers—or ‘words’.
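The recirculation described above can be modelled as a fixed-length queue. The class below is a toy sketch, not a model of NPL's actual hardware: one `tick` stands for one pulse period, in which the oldest bit emerges from the output end and is immediately re-injected at the input end.

```python
from collections import deque

# Toy model of a mercury delay line as a recirculating bit store.
class DelayLine:
    def __init__(self, bits):
        self.line = deque(bits)       # the pulse train in transit

    def tick(self):
        """One pulse period: the oldest bit emerges and is re-injected."""
        bit = self.line.popleft()     # reconverted to an electric pulse
        self.line.append(bit)         # re-injected at the input end
        return bit

line = DelayLine([1, 0, 1, 1])
out = [line.tick() for _ in range(8)]
print(out)   # [1, 0, 1, 1, 1, 0, 1, 1] (the stream recirculates unchanged)
```

Because the stream only ever re-emerges in order, a particular word is accessible only at one moment in each circulation, which is the source of the waiting-time problem discussed next.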
The delay-line memory had a ‘waiting time’ problem, later called ‘latency’. An instruction or
number could not be utilized by the computer until it emerged from the delay line. On average,
the waiting time was 0.5 ms. The result of this latency was that the speed of an EDVAC-type
machine was constrained by the speed of the delay line, almost irrespective of the speed of the
arithmetic circuits. EDSAC, for example, which used one-millisecond delay lines, managed just
650 instructions per second, well below its potential.
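The arithmetic behind these figures is simple and worth making explicit. This is a back-of-envelope sketch using the numbers given in the text (a 1 ms circulation time and EDSAC's published rate of about 650 instructions per second); the 'ceiling' it computes assumes every access pays the average wait with no overlapping, which real machines only partly achieved.

```python
# Latency arithmetic for a 1 ms mercury delay line.
circulation_ms = 1.0                  # time for a word to come round again
avg_wait_ms = circulation_ms / 2      # on average, half a circulation
print(avg_wait_ms)                    # 0.5 ms average waiting time

# If every memory access paid the average wait, the machine could make
# at most about 2000 accesses per second, however fast its arithmetic:
ceiling_per_s = 1000 / avg_wait_ms
print(ceiling_per_s)                  # 2000.0

edsac_per_s = 650                     # EDSAC's actual instruction rate
print(edsac_per_s < ceiling_per_s)    # True: well below even this ceiling
```

Since each instruction needs at least one fetch (and often an operand access as well), the delay line, not the arithmetic circuits, set the machine's speed.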
Turing’s insight—which was not described in his ACE report of March 1946, but appeared
a few months later—was that, instead of arranging instructions sequentially in the memory,