def start_threads(listener, workers=4):
    t = (listener,)
    for i in range(workers):
        Thread(target=zen_utils.accept_connections_forever, args=t).start()

if __name__ == '__main__':
    address = zen_utils.parse_command_line('multi-threaded server')
    listener = zen_utils.create_srv_socket(address)
    start_threads(listener)
Note that this is only one possible design for a multithreaded program: the main thread starts n server threads
and then exits, confident that those n threads will run forever and thus keep the process alive. Other options are
possible. The main thread could stay alive, for example, and become a server thread itself. Or it could act as a monitor,
checking periodically to make sure that the n server threads are still up and restarting replacement threads if any of
them die. A switch from threading.Thread to multiprocessing.Process would give each thread of control its own
separate memory image and file descriptor space, increasing expense from an operating system point of view but
better isolating the threads and making it much more difficult for them to crash a main monitor thread.
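For instance, the process-based variant requires only a small change to the start-up code above. The following sketch is not one of this chapter's listings; it assumes the same zen_utils helper module used throughout the chapter and a fork-style start method under which each child process inherits the already-bound listening socket.
# A sketch, not a book listing: the process-based variant of
# start_threads(), assuming the chapter's zen_utils helpers and a
# fork-style start method so each child inherits the listening socket.
from multiprocessing import Process

import zen_utils

def start_processes(listener, workers=4):
    t = (listener,)
    for i in range(workers):
        # Each Process gets its own memory image and file descriptor
        # table, isolating the workers from one another.
        Process(target=zen_utils.accept_connections_forever, args=t).start()

if __name__ == '__main__':
    address = zen_utils.parse_command_line('multi-process server')
    listener = zen_utils.create_srv_socket(address)
    start_processes(listener)
A monitoring main thread could work in the same spirit: keep references to the workers, check each one's is_alive() method periodically, and start a replacement for any that has died.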
However, all of these patterns, which you can learn about in the documentation to the threading and
multiprocessing modules as well as in books and guides to Python concurrency, share the same essential feature:
dedicating a somewhat expensive operating system–visible thread of control to every connected client, whether or
not that client is busy making requests at the moment. But since your server code can remain unchanged while being
put under the control of several threads (assuming that each thread establishes its own database connection and
open files so that no resource coordination is needed between threads), it is simple enough to try the multithreaded
approach on your server’s workload. If it proves able to handle your request load, then its simplicity
makes it an especially attractive technique for in-house services not touched by the public, where an adversary
cannot simply open idle connections until you have exhausted your pool of threads or processes.
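As an illustration of that per-thread resource rule, here is a minimal sketch, not taken from the chapter's listings, that gives every worker thread its own lazily opened connection by storing it on a threading.local object; sqlite3 and the 'app.db' filename stand in for whatever database your handler actually talks to.
# A sketch of per-thread resources: each thread opens and reuses its
# own database connection, so no coordination between threads is needed.
import sqlite3
import threading

_local = threading.local()

def get_connection():
    # Lazily open one connection per thread and cache it on the
    # thread-local object instead of sharing a single connection.
    if not hasattr(_local, 'db'):
        _local.db = sqlite3.connect('app.db')  # 'app.db' is a placeholder
    return _local.db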
The Legacy SocketServer Framework
The pattern established in the previous section of using operating system–visible threads of control for handling
multiple client conversations at the same time is popular enough that there is a framework implementing the pattern
built into the Python Standard Library. While by now it is showing its age, with a 1990s design fraught with object orientation and multiply inherited mix-ins, it is worth a quick example both to show how the multithreaded pattern
can be generalized and to make you familiar with the module, in case you ever need to maintain old code that uses it.
The socketserver module (known as SocketServer in the days of Python 2) breaks out the server pattern, which
knows how to open a listening socket and accept new client connections, from the handler pattern, which knows
how to converse over an open socket. These two patterns are combined by instantiating a server object that is given a
handler class as one of its arguments, as you can see in Listing 7-5.
Listing 7-5. Threaded Server Built Atop the Standard Library Server Pattern
#!/usr/bin/env python3
# Foundations of Python Network Programming, Third Edition
# https://github.com/brandon-rhodes/fopnp/blob/m/py3/chapter07/srv_legacy1.py
# Uses the legacy "socketserver" Standard Library module to write a server.
from socketserver import BaseRequestHandler, TCPServer, ThreadingMixIn
import zen_utils