7.1 PARALLELISM INTRODUCTION
plete computer system that could function on its own. Parallelism is achieved
by having a central controlling computer assign tasks to each of the computers
in the cluster.
In a tightly coupled machine, each of the processors shares a centralized
memory. Communication between the processors is done by one processor
writing information into memory and then one or more processors reading
that information back out. An example of this communication will be given in
Section 7.3.
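As a rough illustration of this write-then-read pattern (this is not the example from Section 7.3), the following sketch uses Python threads to stand in for tightly coupled processors and a shared dictionary for the centralized memory; the variable and function names are assumptions made for illustration:

```python
import threading

shared_memory = {}          # stands in for the centralized memory
ready = threading.Event()   # signals that the writer has finished

def writer():
    # one processor writes information into memory
    shared_memory["result"] = 42
    ready.set()

def reader(out):
    # another processor reads that information back out
    ready.wait()
    out.append(shared_memory["result"])

out = []
t1 = threading.Thread(target=writer)
t2 = threading.Thread(target=reader, args=(out,))
t2.start()
t1.start()
t1.join()
t2.join()
```

The event is needed so the reader does not look at the memory before the writer has stored its value; real shared-memory machines need analogous synchronization.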
Processor Communication
In a loosely coupled machine, we said that the processors communicate over
cables or wires. We now look at some of the possible configurations for
these processors and wires. At one extreme is a fully connected
network, where each processor has a connection to every other processor. At
the other extreme is a linear network, where the processors are laid out in a
line, and each processor is connected to the two immediately adjacent (except
for the two ends, which have only one adjacent processor). In a fully connected
network, information can flow quickly between processors, but at the
high cost of the extensive wiring that is necessary. In a linear network,
information travels more slowly because it must be passed from processor
to processor until it reaches its destination, and a single point of failure disrupts
the information flow. This is not surprising if you recall our discussion of
biconnected components in Chapter 6.
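This trade-off can be quantified. A fully connected network of n processors needs n(n - 1)/2 wires but delivers any message in one hop; a linear network needs only n - 1 wires but may need up to n - 1 hops. A minimal sketch (the function names are assumptions):

```python
def fully_connected_links(n):
    # every processor is wired to every other: n*(n-1)/2 links,
    # and any message travels over exactly one link
    return n * (n - 1) // 2

def linear_links(n):
    # processors laid out in a line need only n-1 links
    return n - 1

def linear_hops(i, j):
    # a message must cross every link between processors i and j
    return abs(i - j)
```

For six processors, the fully connected network uses 15 wires against the linear network's 5, which is the cost referred to above.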
An alternative to a linear network that improves reliability is a ring network,
which is like a linear network, except that the first and last nodes in the line are
also connected. Information can now travel more quickly because it will only
need to be passed through at most one-half of the processors. Notice in Fig.
7.1 that a message from node 1 to node 5 would have to pass through three
■ FIGURE 7.1  A fully connected and a linear network configuration
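The shortest distance around a ring described above can be sketched as follows (a minimal sketch; the function name and 1-based node numbering are assumptions):

```python
def ring_hops(i, j, n):
    # in a ring of n processors, a message can travel either way
    # around, so the shortest path is the smaller of the two arcs
    d = abs(i - j)
    return min(d, n - d)
```

For a six-processor ring, a message from node 1 to node 5 crosses only two links (going 1 -> 6 -> 5), and no message ever needs to cross more than n/2 links, which is why the ring improves on the linear network.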