Mastering Nginx


Reverse Proxy Advanced Topics


As we saw in the previous chapter, a reverse proxy makes connections to upstream servers on behalf of clients. These upstream servers therefore have no direct connection to the client. This is done for several reasons: security, scalability, and performance.
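
As a minimal sketch, such a pass-through might be configured as follows; the upstream address 127.0.0.1:8080 is only a placeholder:

    server {
        listen 80;

        location / {
            # NGINX, not the client, opens the connection to the upstream.
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header Host $host;
        }
    }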


A reverse proxy server aids security because an attacker trying to reach the upstream server directly would first have to find a way onto the reverse proxy. Connections to the client can be encrypted by running them over HTTPS. These SSL connections may be terminated on the reverse proxy when the upstream server cannot or should not provide this functionality itself. NGINX can act as an SSL terminator as well as provide additional access lists and restrictions based on various client attributes.
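
For illustration, SSL termination combined with an address-based access list might be configured along the following lines; the certificate paths, network range, and upstream address are placeholders:

    server {
        listen 443 ssl;
        server_name www.example.com;

        # Placeholder certificate and key; the upstream never sees the SSL handshake.
        ssl_certificate     /etc/nginx/ssl/example.com.crt;
        ssl_certificate_key /etc/nginx/ssl/example.com.key;

        location / {
            # Simple access list based on the client's address (placeholder range).
            allow 192.168.0.0/24;
            deny  all;

            # Decrypted requests are passed to the upstream over plain HTTP.
            proxy_pass http://127.0.0.1:8080;
            proxy_set_header X-Forwarded-Proto https;
        }
    }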


Scalability can be achieved by utilizing a reverse proxy to make parallel connections to multiple upstream servers, enabling them to act as if they were one. If the application requires more processing power, additional upstream servers can be added to the pool served by a single reverse proxy.
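
A sketch of such a pool, assuming two placeholder application servers, might look like this:

    upstream app_pool {
        # Placeholder addresses; adding another server line grows the pool.
        server 10.0.0.10:8080;
        server 10.0.0.11:8080;
    }

    server {
        listen 80;

        location / {
            # Requests are distributed across all servers in the pool.
            proxy_pass http://app_pool;
        }
    }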


Performance of an application may be enhanced through the use of a reverse proxy in several ways. The reverse proxy can cache and compress content before delivering it to the client. NGINX as a reverse proxy can handle more concurrent client connections than a typical application server. Certain architectures configure NGINX to serve static content from a local disk cache, passing only dynamic requests to the upstream server to handle. Clients can keep their connections to NGINX alive, while NGINX terminates the ones to the upstream servers immediately, thus freeing resources on those upstream servers.
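
The following sketch combines several of these techniques; the cache path, static file root, and upstream address are assumptions made only for illustration:

    # Cache dynamic responses on local disk (placeholder path and zone name).
    proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

    server {
        listen 80;

        # Compress responses before delivering them to the client.
        gzip on;
        gzip_types text/css application/json;

        # Keep client connections open for reuse.
        keepalive_timeout 65;

        # Serve static content directly from local disk.
        location /static/ {
            root /var/www;
        }

        # Only dynamic requests reach the upstream; responses are cached.
        location / {
            proxy_cache app_cache;
            proxy_cache_valid 200 10m;
            proxy_pass http://127.0.0.1:8080;
        }
    }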
