from socketserver import BaseRequestHandler, TCPServer, ThreadingMixIn

import zen_utils


class ZenHandler(BaseRequestHandler):
    def handle(self):
        zen_utils.handle_conversation(self.request, self.client_address)


class ZenServer(ThreadingMixIn, TCPServer):
    allow_reuse_address = 1
    # address_family = socket.AF_INET6  # uncomment if you need IPv6


if __name__ == '__main__':
    address = zen_utils.parse_command_line('legacy "SocketServer" server')
    server = ZenServer(address, ZenHandler)
    server.serve_forever()


By substituting ForkingMixIn for ThreadingMixIn, the programmer can have fully isolated processes, rather than
threads, serve incoming clients, as in the sketch that follows.
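As a minimal sketch of that substitution, assuming the same zen_utils module and ZenHandler class as in the
listing above, the forking variant might look like this (ForkingMixIn relies on fork() and is therefore unavailable
on Windows):

from socketserver import ForkingMixIn, TCPServer

# Reuses zen_utils and ZenHandler from the listing above.

class ZenForkingServer(ForkingMixIn, TCPServer):
    allow_reuse_address = 1

if __name__ == '__main__':
    address = zen_utils.parse_command_line('legacy "SocketServer" forking server')
    server = ZenForkingServer(address, ZenHandler)
    server.serve_forever()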
The great weakness of this approach becomes apparent when you compare it with the earlier Listing 7-4, which started
a fixed number of threads that a server administrator could choose based on how many threads of control a
given server and operating system can easily manage without a significant degradation in performance. Listing 7-5,
by contrast, lets the pool of connecting clients determine how many threads are started, with no limit on how many
threads ultimately wind up running on the server! This makes it easy for an attacker to bring the server to its knees.
This Standard Library module, therefore, cannot be recommended for production and customer-facing services.


Async Servers


How can you keep the CPU busy during the delay between sending an answer to a client and receiving its next
request, without incurring the expense of an operating system–visible thread of control per client? The answer is that
you can write your server using an asynchronous pattern: instead of blocking and waiting for data to arrive or
depart from one particular client, the code is willing to hear from a whole list of waiting client sockets and
responds whenever one of those clients is ready for more interaction.
This pattern is made possible by two features of modern operating system network stacks. The first is that they
offer a system call that lets a process block waiting on a whole list of client sockets, instead of on only a single client
socket, which allows a single thread to serve hundreds or thousands of client sockets at a time. The second is
that a socket can be configured as nonblocking, so that it promises never to make the calling thread block but
always returns from a send() or recv() call immediately, whether or not further progress can be made in the
conversation. If progress is delayed, then it is up to the caller to try again later, when the client looks ready for
further interaction.
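The following brief sketch, which is not one of this chapter's numbered listings, exercises both features locally by
letting a socketpair() stand in for a real client connection: the nonblocking recv() returns immediately with an
error instead of suspending the thread, and select() lets the thread wait on a whole list of sockets at once.

import select
import socket

# A connected pair of sockets stands in for a real client connection.
client_side, server_side = socket.socketpair()
server_side.setblocking(False)          # recv()/send() now return immediately

try:
    server_side.recv(4096)
except BlockingIOError:
    # Nothing has arrived yet, so instead of suspending the thread the call
    # hands control straight back; an event loop would simply move on to
    # another socket and retry this one later.
    print('no data ready yet')

client_side.sendall(b'?')
select.select([server_side], [], [])    # block on a *list* of waiting sockets
print(server_side.recv(4096))           # now returns data without blocking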
The name asynchronous means that the server code never stops to wait for a particular client and that the
thread of control running the code is not synchronized, or made to wait in lockstep, with the conversation of any one
particular client. Instead, it switches freely among all connected clients to do the work of serving them.
There are several calls by which operating systems support asynchronous mode. The oldest is the POSIX call
select(), but it suffers from several inefficiencies that have inspired modern replacements: poll(), which is also
standardized by POSIX, and system-specific calls such as epoll() on Linux and kqueue() on BSD. The book UNIX
Network Programming by W. Richard Stevens (Prentice Hall, 2003) is the standard reference on the subject. Here I
will focus on poll() and skip the others, because the intention of this chapter is not really that you implement your
own asynchronous control loop. Instead, you are taking a poll()-powered loop merely as an example so that you
understand what happens under the hood of a full asynchronous framework, which is how you will really want to
implement asynchrony in your programs. Several such frameworks are illustrated in the following sections.
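Before turning to those frameworks, here is a bare-bones sketch, not the chapter's full listing, of the overall shape
of a poll()-powered event loop. It simply echoes each client's data back, keeps every client socket blocking for
brevity, and omits the per-client buffering, POLLOUT handling, and error handling that a real server needs. Note
that select.poll() is unavailable on Windows, and the 127.0.0.1:1060 address is merely an example.

import select
import socket

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
listener.bind(('127.0.0.1', 1060))
listener.listen(5)

sockets = {listener.fileno(): listener}    # map file descriptors to sockets
poll_object = select.poll()
poll_object.register(listener, select.POLLIN)

while True:
    for fd, event in poll_object.poll():   # block on *all* registered sockets
        sock = sockets[fd]
        if sock is listener:               # a new client is connecting
            client, address = sock.accept()
            sockets[client.fileno()] = client
            poll_object.register(client, select.POLLIN)
        elif event & select.POLLIN:        # this client has sent us data
            data = sock.recv(4096)
            if data:
                sock.sendall(data)         # echo the data straight back
            else:                          # an empty result: the client closed
                poll_object.unregister(fd)
                del sockets[fd]
                sock.close()
        else:                              # POLLHUP, POLLERR, and so on
            poll_object.unregister(fd)
            del sockets[fd]
            sock.close()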
