In 1966, Michael J. Flynn suggested a categorization of computer architectures
defined by whether the instruction and data streams were single or multiple.
These category names were widely used from the 1970s to the early 2000s. The two
categories that use multiple data streams are defined as follows: Computers that
have multiple processors that execute the same instruction simultaneously, each
on different data, are called Single-Instruction Multiple-Data (SIMD)
architecture computers. In an SIMD computer, each processor has its own local memory.
One processor controls the operation of the other processors. Because all of the
processors, except the controller, execute the same instruction at the same time,
no synchronization is required in the software. Perhaps the most widely used
SIMD machines are vector processors. They have groups of registers that store
the operands of a vector operation, and the same instruction is executed on the
whole group of operands simultaneously.
Originally, the kinds of programs that could most benefit from this architecture
were in scientific computation, an area of computing that is often the target of
multiprocessor machines. However, SIMD processors are now used for a variety
of application areas, among them graphics and video processing. Until recently,
most supercomputers were vector processors.
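As a concrete (if simplified) illustration of the SIMD idea, the following C fragment uses the x86 SSE intrinsics from <immintrin.h>; the array names and values are invented for this example. A single _mm_add_ps instruction adds four pairs of floating-point operands at once, much as a vector processor applies one instruction to a whole group of operands.

/* A minimal SIMD sketch, assuming an x86 processor with SSE support. */
#include <immintrin.h>
#include <stdio.h>

int main(void) {
    float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
    float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
    float c[4];

    __m128 va = _mm_loadu_ps(a);    /* load four floats into a vector register */
    __m128 vb = _mm_loadu_ps(b);
    __m128 vc = _mm_add_ps(va, vb); /* one instruction adds all four pairs */
    _mm_storeu_ps(c, vc);           /* store the four results back to memory */

    for (int i = 0; i < 4; i++)
        printf("%g ", c[i]);        /* prints 11 22 33 44 */
    printf("\n");
    return 0;
}

Compiled with, for example, gcc -msse2, the four additions are performed by one vector instruction rather than four scalar ones.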
Computers that have multiple processors that operate independently but
whose operations can be synchronized are called Multiple-Instruction
Multiple-Data (MIMD) computers. Each processor in an MIMD computer executes
its own instruction stream. MIMD computers can appear in two distinct
configurations: distributed and shared-memory systems. The distributed MIMD
machines, in which each processor has its own memory, can be either built in
a single chassis or distributed, perhaps over a large area. The shared-memory
MIMD machines obviously must provide some means of synchronization to
prevent memory access clashes. Even distributed MIMD machines require
synchronization to operate together on single programs. MIMD computers, which
are more general than SIMD computers, support unit-level concurrency. The
primary focus of this chapter is on language design for shared-memory MIMD
computers, which are often called multiprocessors.
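To make the need for synchronization concrete, here is a minimal sketch in C using POSIX threads; the shared counter and the iteration count are assumptions made for this example, not anything from the text. Two independent instruction streams update the same shared memory location, and a mutex prevents the access clashes described above.

/* A minimal shared-memory synchronization sketch using POSIX threads. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                /* shared memory, visible to all threads */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;                          /* unused */
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* serialize access to the shared counter */
        counter++;                      /* without the lock, updates could be lost */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, increment, NULL);
    pthread_create(&t2, NULL, increment, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", counter); /* reliably 200000 with the lock held */
    return 0;
}

Without the mutex, the two read-modify-write sequences can interleave and lose updates; this is exactly the kind of memory access clash that shared-memory MIMD machines must guard against.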
With the advent of powerful but low-cost single-chip computers, it became
possible to have large numbers of these microprocessors connected into small
networks within a single chassis. These kinds of computers, which often use
off-the-shelf microprocessors, have appeared from a number of different
manufacturers.
One important reason why software has not evolved faster to make use of
concurrent machines is that the power of processors has continually increased.
One of the strongest motivations to use concurrent machines is to increase
the speed of computation. However, two hardware factors have combined to
provide faster computation, without requiring any change in the architecture
of software systems. First, processor clock rates have become faster with each
new generation of processors (the generations have appeared roughly every 18
months). Second, several different kinds of concurrency have been built into
the processor architectures. One of these is the pipelining of instructions and
data from memory to the processor, in which instructions are fetched and decoded
before they are needed.