
import threading, time

count = 0

def adder(addlock):                     # shared lock passed in as argument
    global count
    with addlock:
        count = count + 1               # auto acquire/release around stmt
    time.sleep(0.5)
    with addlock:
        count = count + 1               # only 1 thread updating at once

addlock = threading.Lock()
threads = []
for i in range(100):
    thread = threading.Thread(target=adder, args=(addlock,))
    thread.start()
    threads.append(thread)

for thread in threads: thread.join()
print(count)


Although some basic operations in the Python language are atomic and need not be
synchronized, you’re probably better off doing so for every potential concurrent update.
Not only might the set of atomic operations change over time, but the internal
implementation of threads in general can as well (and in fact, it may in Python 3.2, as
described ahead).
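One way to see why even a simple increment is not safe without a lock is to disassemble it (this illustration is not part of the book’s listing; the function name bump is invented here). The statement compiles to separate load, add, and store bytecodes, and a thread switch can occur between any two of them, letting another thread’s update be lost:

```python
import dis

def bump():
    global count
    count = count + 1

# Prints the bytecode: a LOAD_GLOBAL, an add, then a STORE_GLOBAL --
# three distinct steps, not one atomic operation.
dis.dis(bump)
```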


Of course, this is an artificial example (spawning 100 threads to add twice isn’t exactly
a real-world use case for threads!), but it illustrates the issues that threads must address
for any sort of potentially concurrent updates to shared objects or names. Luckily, for
many or most realistic applications, the queue module of the next section can make
thread synchronization an automatic artifact of program structure.


Before we move ahead, I should point out that besides Thread and Lock, the
threading module also includes higher-level objects for synchronizing access to shared
items (e.g., Semaphore, Condition, Event)—many more, in fact, than we have space to
cover here; see the library manual for details. For more examples of threads and forks
in general, see the remainder of this chapter as well as the examples in the GUI and
network scripting parts of this book. We will thread GUIs, for instance, to avoid blocking
them, and we will thread and fork network servers to avoid denying service to clients.
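As a brief illustration of one of those higher-level objects (the names worker, ready, and result below are invented for this sketch, not taken from the book’s listings), an Event lets one thread block until another signals that it may proceed:

```python
import threading

ready = threading.Event()
result = []

def worker():
    ready.wait()                    # block until the event is set
    result.append('data seen')

thread = threading.Thread(target=worker)
thread.start()
result.append('data ready')         # prepare shared state first
ready.set()                         # wake the waiting thread
thread.join()
print(result)                       # → ['data ready', 'data seen']
```

Because the worker waits on the event before touching the list, the ordering of the two appends is guaranteed without an explicit Lock.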


We’ll also explore the threading module’s approach to program exits in the absence of
join calls in conjunction with queues—our next topic.


The queue Module


You can synchronize your threads’ access to shared resources with locks, but you often
don’t have to. As mentioned, realistically scaled threaded programs are often structured
as a set of producer and consumer threads, which communicate by placing data on,
and taking it off of, a shared queue. As long as the queue synchronizes access to itself,
this automatically synchronizes the threads’ interactions.
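A minimal producer/consumer sketch along these lines (the function names and the None sentinel convention here are illustrative choices, not the book’s listing): the Queue object does all of its locking internally, so neither thread ever touches a Lock itself.

```python
import queue, threading

dataqueue = queue.Queue()           # thread-safe: locks internally
results = []

def producer():
    for i in range(5):
        dataqueue.put(i)            # hand data to the consumer
    dataqueue.put(None)             # sentinel: signal end of data

def consumer():
    while True:
        item = dataqueue.get()      # blocks until data is available
        if item is None:
            break
        results.append(item * 2)

threads = [threading.Thread(target=producer),
           threading.Thread(target=consumer)]
for thread in threads: thread.start()
for thread in threads: thread.join()
print(results)                      # → [0, 2, 4, 6, 8]
```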


204 | Chapter 5: Parallel System Tools
