One last time, you can see that asynchronous frameworks, unless they perform invisible magic like gevent or eventlet
(which are currently Python 2 only), force you to write your server code using different idioms than you would use in a simple
server like the one shown in Listing 7-3. Whereas multithreading and multiprocessing simply run your single-threaded code
without modification, an asynchronous approach forces you to break your code into small pieces that can each run
without ever blocking. A callback style forces each unblockable snippet of code to live inside its own method; a coroutine style
has you wedge each basic unblockable operation between yield or yield from statements.
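
To make the idiom contrast concrete, here is a minimal sketch of a coroutine-style handler under asyncio; the handler name and its trivial echo behavior are hypothetical illustrations, not code from this chapter's listings.

import asyncio

@asyncio.coroutine
def handle_client(reader, writer):
    # Each yield from marks a spot where this coroutine can pause
    # without blocking the rest of the event loop.
    data = yield from reader.readline()
    writer.write(data)               # queue the reply bytes
    yield from writer.drain()        # pause until the send buffer empties
    writer.close()

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.start_server(handle_client, '127.0.0.1', 1060))
    loop.run_forever()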


The Best of Both Worlds

These asynchronous servers can switch nimbly between one client’s traffic and another’s by simply glancing from
one protocol object to another (or, in the case of the more primitive Listing 7-6, between one dictionary entry and
another). This can serve clients with far less expense than when the operating system needs to be involved in the
context switches.
But an asynchronous server has a hard limit. Precisely because it does all of its work within a single operating
system thread, it hits a wall and can process no further client work once it has pushed the CPU it is running on
to 100 percent utilization. The pattern, at least in its pristine form, is always confined to a single processor,
regardless of how many cores your server features.
Fortunately, a solution is ready at hand. When you need high performance, write your service using an
asynchronous callback object or coroutine and launch it under an asynchronous framework. Then step back and
configure your server operating system to start as many of these event loop processes as you have CPU cores!
(Consult with your server administrator about one detail: should you leave one or two cores free for the operating
system instead of occupying them all?) You will now have the best of both worlds. On a given CPU, the asynchronous
framework can blaze away, swapping between active client sockets as often as its heart desires without incurring a
single context switch into another process. But the operating system can distribute new incoming connections among
all of the active server processes, ideally balancing the load adequately across the entire server.
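
One way to arrange this, assuming a Linux kernel and a Python build that expose the SO_REUSEPORT socket option, is to have each worker process bind its own copy of the listening port with that option set; the kernel then spreads incoming connections across the workers. The sketch below merely illustrates the pattern, it is not one of this chapter's listings, and its trivial handler stands in for real protocol code.

import asyncio, os, socket

@asyncio.coroutine
def handle_client(reader, writer):
    data = yield from reader.readline()   # stand-in for real protocol code
    writer.write(data)
    yield from writer.drain()
    writer.close()

def serve(port):
    # Each worker binds its own socket; SO_REUSEPORT lets them share the
    # port, and the kernel balances new connections among them.
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEPORT, 1)
    sock.bind(('', port))
    loop = asyncio.get_event_loop()
    loop.run_until_complete(asyncio.start_server(handle_client, sock=sock))
    loop.run_forever()

if __name__ == '__main__':
    workers = os.cpu_count() or 1        # perhaps leave a core or two free
    for i in range(workers):
        if os.fork() == 0:               # each child runs its own event loop
            serve(1060)
    for i in range(workers):             # the parent merely waits on them
        os.wait()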
As discussed in the section “A Few Words About Deployment,” you will probably want to corral these processes
inside a daemon that can monitor their health and restart them, or notify staff, if they fail. Any of the mechanisms
discussed there should work just fine for an asynchronous service, from supervisord all the way up to full
platform-as-a-service containerization.
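
With supervisord, for example, a single configuration stanza can both launch one copy of the service per core and restart any copy that dies. The program name, command path, and process count below are hypothetical; only the option names come from supervisord itself.

[program:asyncserver]
command=/usr/bin/python3 /srv/app/async_server.py
numprocs=4
process_name=%(program_name)s_%(process_num)02d
autorestart=true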


Running Under inetd


I should not close this chapter without mentioning the venerable inetd daemon, available on nearly all BSD and
Linux distributions. Invented in the early days of the Internet, it solves the problem of needing to start n different
daemons at system boot in order to offer n different network services on a given server machine.
In its /etc/inetd.conf file, you simply list every port on which you want the machine to listen.
The inetd daemon does a bind() and listen() on every one of them, but it kicks off a server process only if a
client actually connects. This pattern makes it easy to support low-port-number services that run under a normal
user account, since inetd itself is the process that is opening the low-numbered port. For a TCP service like the one in
this chapter (see your inetd(8) documentation for the more complicated case of a UDP datagram service), the inetd
daemon can either launch one process per client connection or expect your server to stay up and continue listening
for new connections once it has accepted the first one.
Creating one process per connection is more expensive and presents the server with a higher load, but it is also
simpler. Single-shot services are designated by the string nowait in the fourth field of a service’s inetd.conf entry.


1060 stream tcp nowait brandon /usr/bin/python3 /usr/bin/python3 in_zen1.py


Such a service will start up and find that its standard input, output, and error are already connected to the client
socket. The service needs to converse only with that one client and then exit. Listing 7-10 gives an example, which can
be used in conjunction with the inetd.conf line just given.
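
Listing 7-10 is not reproduced on this page, but the shape of such a single-shot service is easy to sketch: because inetd has already connected file descriptor 0 to the client, the script can simply wrap that descriptor in a socket object, converse, and exit. The greeting text below is hypothetical; only the pattern matches the configuration line just given.

import socket

if __name__ == '__main__':
    # Under inetd with "nowait", descriptor 0 is already the connected client
    # socket, so there is no need to call bind(), listen(), or accept().
    sock = socket.fromfd(0, socket.AF_INET, socket.SOCK_STREAM)
    sock.sendall(b'Greetings from a single-shot inetd service\r\n')
    data = sock.recv(1024)        # converse with this one client...
    sock.sendall(data)
    sock.close()                  # ...then exit; inetd handles the next client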
