So, NGINX would be able to handle just under 25,000 simultaneous active connections
in its default configuration, assuming that these buffers are constantly filled. There
are a number of other factors that come into play, such as cached content and idle
connections, but this gives us a good ballpark estimate to work with.
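For reference, the defaults behind that estimate look roughly like the following sketch, assuming a 4 KB memory page (which determines the default buffer size) and the 768 MB we allocated to NGINX:
http {
    proxy_buffers 8 4k;    # default: 8 buffers of one page (4 KB) each = 32 KB per connection
}
At 32 KB per connection, 805,306,368 bytes / 32,768 bytes = 24,576 connections, which is where the "just under 25,000" figure comes from.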
Now, if we take the following numbers as our average request and response sizes,
we see that eight 4 KB buffers (32 KB in total) just aren't enough to hold a typical
response. We want NGINX to buffer as much of the response as possible so that the
user receives it all at once, provided the user is on a fast link.
- Average request size: 800 bytes
- Average response size: 900 KB
The tuning examples in the rest of this section will use more
memory per connection at the expense of the number of
concurrent, active connections. They are optimizations, and
shouldn't be understood as recommendations for a general
configuration. NGINX is already tuned optimally to serve
many slow clients and a few fast upstream servers. As
computing trends more towards mobile users, the client
connection is often considerably slower than a broadband
user's connection. So, it's important to know your users and
how they will be connecting before embarking on any
optimizations.
We would adjust our buffer sizes accordingly so that the whole response would fit
in the buffers:
http {
    proxy_buffers 30 32k;
}
This means, of course, that we would be able to handle far fewer concurrent users.
Thirty 32 KB buffers amount to 983,040 bytes (30 × 32 × 1024), or 960 KB, per connection,
enough to hold the entire 900 KB average response.
The 768 MB we allocated to NGINX is 805,306,368 bytes (768 × 1024 × 1024).
Dividing the two, we come up with 805,306,368 / 983,040 = 819.2 active connections.
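If you want memory use to stay within that budget after a change like this, one option is to cap the number of connections to match it. The following is a minimal sketch, assuming a single worker process; the 800 figure is simply an illustrative round number below the ~819 limit computed above:
events {
    worker_connections 800;    # stay below the ~819 active connections the 768 MB budget supports
}

http {
    proxy_buffers 30 32k;      # ~960 KB of proxy buffers per buffered connection
}
Keep in mind that worker_connections counts connections to proxied upstream servers as well as client connections, so the real ceiling on buffered client connections is lower than the raw value.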