Concepts of Programming Languages

13.1 Introduction 577

example the interpretation of client-side scripting code. Another example is
software systems designed to simulate actual physical systems that consist
of multiple concurrent subsystems. For all of these kinds of applications,
the programming language (or a library, or at least the operating system)
must support unit-level concurrency.
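As a minimal sketch of unit-level concurrency (written here in Python; the function and subsystem names are illustrative, not from the text), two program units can be run as separate threads that execute concurrently and are then joined:

```python
# Two program units (functions) executed as concurrent threads.
import threading

results = {}

def simulate_subsystem(name, steps):
    # Each concurrent unit performs its own work independently.
    total = 0
    for i in range(steps):
        total += i
    results[name] = total

threads = [
    threading.Thread(target=simulate_subsystem, args=("pump", 100)),
    threading.Thread(target=simulate_subsystem, args=("valve", 50)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for both units to finish

print(results["pump"], results["valve"])  # prints: 4950 1225
```

Each thread here models one concurrent subsystem of a simulated physical system; the language's thread library supplies the unit-level concurrency.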
Statement-level concurrency is quite different from concurrency at the unit
level. From a language designer’s point of view, statement-level concurrency
is largely a matter of specifying how data should be distributed over multiple
memories and which statements can be executed concurrently.
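Statement-level concurrency can be illustrated with a loop whose iterations are independent, so its element-wise statements may be distributed and executed concurrently. The following is a sketch only (Python, with a thread pool standing in for the compiler-managed distribution the text describes):

```python
# The iterations of this element-wise computation are independent,
# so they can be distributed across workers and run concurrently.
from concurrent.futures import ThreadPoolExecutor

def scale(x):
    return 2 * x  # one independent "statement" per element

data = list(range(8))  # data to be distributed
with ThreadPoolExecutor(max_workers=4) as ex:
    result = list(ex.map(scale, data))  # elements processed concurrently

print(result)  # prints: [0, 2, 4, 6, 8, 10, 12, 14]
```

In a language with true statement-level concurrency, the compiler, rather than the programmer, decides how `data` is laid out across memories and which of these statements run at the same time.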
The goal of developing concurrent software is to produce scalable and
portable concurrent algorithms. A concurrent algorithm is scalable if the
speed of its execution increases when more processors are available. This is
important because the number of processors increases with each new
generation of machines. The algorithms must be portable because the lifetime of
hardware is relatively short. Therefore, software systems should not depend
on a particular architecture—that is, they should run efficiently on machines
with different architectures.
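The scalability idea can be sketched as follows (a Python illustration under the assumption of an embarrassingly parallel summation; `parallel_sum` is a hypothetical helper, not an algorithm from the text). The decomposition is portable because it depends only on the worker count, not on a particular architecture, and with p processors the ideal running time shrinks toward 1/p of the sequential time:

```python
# A scalable decomposition: the same algorithm splits its work into as
# many chunks as there are processors; the result never depends on the
# processor count, only the speed does.
from concurrent.futures import ThreadPoolExecutor

def chunk_sum(chunk):
    return sum(chunk)

def parallel_sum(data, workers):
    size = (len(data) + workers - 1) // workers
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return sum(ex.map(chunk_sum, chunks))

data = list(range(1000))
print(parallel_sum(data, 2) == parallel_sum(data, 8) == sum(data))  # prints: True
```

(Real speedup measurements depend on the hardware and the runtime; this sketch only shows the architecture-independent decomposition.)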
The intention of this chapter is to discuss the aspects of concurrency that
are most relevant to language design issues, rather than to present a definitive
study of all of the issues of concurrency, including the development of
concurrent programs. That would clearly be inappropriate for a book on programming
languages.

13.1.1 Multiprocessor Architectures


A large number of different computer architectures have more than one processor
and can support some form of concurrent execution. Before beginning to discuss
concurrent execution of programs and statements, we briefly describe some of
these architectures.
The first computers that had multiple processors had one general-purpose
processor and one or more other processors, often called peripheral processors,
that were used only for input and output operations. This architecture allowed
those computers, which appeared in the late 1950s, to execute one program
while concurrently performing input or output for other programs.
By the early 1960s, there were machines that had multiple complete
processors. These processors were used by the job scheduler of the operat-
ing system, which distributed separate jobs from a batch-job queue to the
separate processors. Systems with this structure supported program-level
concurrency.
In the mid-1960s, machines appeared that had several identical partial
processors that were fed certain instructions from a single instruction stream. For
example, some machines had two or more floating-point multipliers, while
others had two or more complete floating-point arithmetic units. The
compilers for these machines were required to determine which instructions could be
executed concurrently and to schedule these instructions accordingly. Systems
with this structure supported instruction-level concurrency.