
simple but fully functional HTTP (web) server that knows how to run server-side CGI
scripts. We’ll explore those larger server tools in Chapter 15.


Multiplexing Servers with select


So far we’ve seen how to handle multiple clients at once with both forked processes
and spawned threads, and we’ve looked at a library class that encapsulates both
schemes. Under both approaches, all client handlers seem to run in parallel with one
another and with the main dispatch loop that continues watching for new incoming
requests. Because all of these tasks run in parallel (i.e., at the same time), the server
doesn’t get blocked when accepting new requests or when processing a long-running
client handler.


Technically, though, threads and processes don’t really run in parallel, unless you’re
lucky enough to have a machine with many CPUs. Instead, your operating system
performs a juggling act—it divides the computer’s processing power among all active
tasks. It runs part of one, then part of another, and so on. All the tasks appear to run
in parallel, but only because the operating system switches focus between tasks so fast
that you don’t usually notice. This process of switching between tasks is sometimes
called time-slicing when done by an operating system; it is more generally known as
multiplexing.


When we spawn threads and processes, we rely on the operating system to juggle the
active tasks so that none are starved of computing resources, especially the main server
dispatcher loop. However, there’s no reason that a Python script can’t do so as well.
For instance, a script might divide tasks into multiple steps—run a step of one task,
then one of another, and so on, until all are completed. The script need only know how
to divide its attention among the multiple active tasks to multiplex on its own.
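

As a concrete illustration of this do-it-yourself juggling, the following sketch breaks
two tasks into steps with Python generator functions and runs one step of each per pass;
the task names and step counts are arbitrary, and this is not one of this chapter's
example files:

def task(name, steps):                     # each next() call runs one step of this task
    for i in range(steps):
        print(name, 'running step', i)
        yield                              # pause here; resume on the next step

tasks = [task('A', 3), task('B', 2)]       # two active tasks to juggle
while tasks:
    for t in list(tasks):                  # give each task one step per pass
        try:
            next(t)
        except StopIteration:
            tasks.remove(t)                # this task has finished; drop it

Each pass through the outer loop gives every still-active task one step of attention,
the same juggling act described above, but performed by the script itself rather than
by the operating system.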


Servers can apply this technique to yield yet another way to handle multiple clients at
once, a way that requires neither threads nor forks. By multiplexing client connections
and the main dispatcher with the select system call, a single event loop can process
multiple clients and accept new ones in parallel (or at least close enough to avoid
stalling). Such servers are sometimes called asynchronous, because they service clients in
spurts, as each becomes ready to communicate. In asynchronous servers, a single main
loop, run in a single process and thread, decides which clients should get a bit of attention
each time through. Client requests and the main dispatcher loop are each given a small
slice of the server’s attention if they are ready to converse.
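

Here is a minimal sketch of such a select-based echo server, not a listing from this
chapter; the port number and receive buffer size are arbitrary choices for illustration:

import select, socket

port = 50007                                   # arbitrary port for illustration
main = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
main.bind(('', port))
main.listen(5)

readables = [main]                             # sockets to watch for input
while True:
    # block until at least one watched socket has input waiting
    ready, _, _ = select.select(readables, [], [])
    for sock in ready:
        if sock is main:                       # main socket: a new client is connecting
            conn, addr = main.accept()
            readables.append(conn)             # watch the new connection too
        else:                                  # client socket: a request has arrived
            data = sock.recv(1024)
            if data:
                sock.send(b'Echo=>' + data)    # give this client a brief turn
            else:                              # empty read: client closed the connection
                readables.remove(sock)
                sock.close()

A single loop both accepts new connections and services existing clients, attending
only to the sockets that select reports as ready on each pass.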


Most of the magic behind this server structure is the operating system select call,
available in Python’s standard select module on all major platforms. Roughly,
select is asked to monitor a list of input sources, output sources, and exceptional
condition sources and tells us which sources are ready for processing. It can be made
to simply poll all the sources to see which are ready; wait for a maximum time period
for sources to become ready; or wait indefinitely until one or more sources are ready
for processing.
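

For instance, the optional last argument to select.select chooses among these three
modes; the throwaway listening socket below is just something for the call to watch:

import select, socket

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)   # one source to monitor
sock.bind(('', 0))                                          # bind to any free port
sock.listen(5)

r, w, e = select.select([sock], [], [], 0)       # poll: return immediately, ready or not
r, w, e = select.select([sock], [], [], 5.0)     # wait at most 5 seconds for readiness
# r, w, e = select.select([sock], [], [])        # wait indefinitely until a source is ready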

