Computational Physics


16 High performance computing and parallelism


16.1 Introduction


It is not necessary to recall the dramatic increase in computer speed and the drop
in cost of hardware over the last two decades. Today, anyone can buy a computer
with which all of the programs in this book can be executed within a reasonable
time – typically a few seconds to a few hours.
On the other hand, if there is one conclusion to be drawn from the enormous
amount of research in computational physics, it should be that for most physical
problems, a realistic treatment, one without severe approximations, is still not within
reach. Quantum many-particle problems, for example, can be handled only if the
correlations are treated approximately (this does not hold for quantum Monte
Carlo techniques, but there we suffer from the minus-sign problem when treating
fermions; see Chapter 12). It is easy to extend this list of examples.
Therefore the physics community always follows the developments in hardware
and software with great interest. Developments in this area are so fast that if a
particular type of machine were presented here as being today’s state of the art,
this statement would be outdated by the time the book is on the shelf. We therefore
restrict ourselves here to a short account of some general principles of computer
architecture and implications for software technology. The two main principles are
pipelining and parallelism. Both concepts were developed a few decades ago, but
pipelining became widespread in supercomputers from around 1980 and has found
its way into most workstations, whereas parallelism has remained more restricted
to the research community and to more expensive machines. The reason for this is
that it is easier to modify algorithms to make them suitable for pipelining than it
is for parallelism. Recently, the dual-core processor has started to find its way into
consumer PCs.
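To make this distinction concrete, the following minimal sketch (an illustration,
not taken from the original text) contrasts the two situations in C; the array
names, the problem size and the use of an OpenMP pragma are purely illustrative
choices. The first loop has independent iterations, so a pipelined (vector)
processor can stream through it and OpenMP can split it over cores; the second
contains a loop-carried recurrence, which resists both optimizations.

    /* A minimal sketch (assumed example): independent iterations
     * versus a loop-carried recurrence.
     * Compile e.g. with: gcc -O2 -fopenmp sketch.c               */
    #include <stdio.h>

    #define N 1000000

    static double a[N], b[N], c[N];

    int main(void)
    {
        for (int i = 0; i < N; i++) {   /* set up the operands */
            b[i] = 1.0;
            c[i] = 2.0;
        }

        /* Independent iterations: a pipelined CPU can stream
           through this, and OpenMP can divide it over cores.  */
        #pragma omp parallel for
        for (int i = 0; i < N; i++)
            a[i] = b[i] + c[i];

        /* Recurrence: iteration i needs the result of iteration
           i-1, so this loop can be neither pipelined nor
           parallelized without reformulating the algorithm.   */
        double s = 0.0;
        for (int i = 0; i < N; i++)
            s = 0.5 * s + a[i];

        printf("a[0] = %g, s = %g\n", a[0], s);
        return 0;
    }

Adapting the second loop would require changing the algorithm itself rather than
merely annotating it, which is precisely why parallelism demands more from the
programmer than pipelining does.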

