97 Things Every Programmer Should Know

Message Passing Leads to Better Scalability in Parallel Systems


Russel Winder


Programmers are taught from the very outset of their study of computing
that concurrency—and especially parallelism, a special subset of concurrency—
is hard, that only the very best can ever hope to get it right, and even they get it
wrong. Invariably, there is great focus on threads, semaphores, monitors, and
how hard it is to get concurrent access to variables to be thread-safe.


True, there are many difficult problems, and they can be very hard to solve.
But what is the root of the problem? Shared memory. Almost all the problems
of concurrency that people go on and on about relate to the use of shared
mutable memory: race conditions, deadlock, livelock, etc. The answer seems
obvious: either forgo concurrency or eschew shared memory!
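
To see the hazard concretely, consider a small sketch in Go (chosen here purely for illustration; the argument is language-neutral): two threads of execution increment a shared counter without synchronization, and updates are silently lost.

    // Two goroutines increment a shared counter with no synchronization.
    // Running this under Go's race detector (go run -race race.go)
    // reports the data race; the printed total is often less than 2000.
    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        counter := 0 // shared mutable memory
        var wg sync.WaitGroup
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    counter++ // unprotected read-modify-write: a race condition
                }
            }()
        }
        wg.Wait()
        fmt.Println(counter)
    }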


Forgoing concurrency is almost certainly not an option. Computers have more
and more cores on an almost quarterly basis, so harnessing true parallelism
becomes more and more important. We can no longer rely on ever-increasing
processor clock speeds to improve application performance. Only by exploiting
parallelism will the performance of applications improve. Obviously, not
improving performance is an option, but it is unlikely to be acceptable to users.


So can we eschew shared memory? Definitely.


Instead of using threads and shared memory as our programming model, we
can use processes and message passing. Process here just means a protected
independent state with executing code, not necessarily an operating system
process. Languages such as Erlang (and occam before it) have shown that
processes are a very successful mechanism for programming concurrent and
parallel systems.
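
As a sketch of the alternative, here is the same counting task restructured as message passing, again in Go (whose goroutines and channels follow the CSP model pioneered by occam; Erlang achieves the same isolation with mailboxes rather than channels): each worker owns its state outright and interacts with the rest of the program only by sending messages.

    // Each worker counts in private state and sends its total over a
    // channel; nothing is shared, so no locks are needed and the result
    // is deterministic by construction.
    package main

    import "fmt"

    func worker(n int, results chan<- int) {
        count := 0 // state owned by this process alone
        for i := 0; i < n; i++ {
            count++
        }
        results <- count // communicate by message, not by shared variable
    }

    func main() {
        results := make(chan int)
        for i := 0; i < 2; i++ {
            go worker(1000, results)
        }
        total := 0
        for i := 0; i < 2; i++ {
            total += <-results // the only point of interaction
        }
        fmt.Println(total) // always 2000
    }

Because each worker's state is protected by isolation rather than by locks, race conditions on that state are impossible, and scaling to more workers means adding more processes, not more locking discipline.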
