Care should be taken when doing this, though, to prevent multiple response
bodies from being cached for the same URI. This can happen when public content
inadvertently has the Set-Cookie header set for it, and the cookie value then becomes
part of the key used to access this data. Separating public content out into a different
location is one way to ensure that the cache is used effectively. For example, images
could be served from an /img location where a different proxy_cache_key is defined:
server {

    # ignore the Set-Cookie response header so that affected responses
    # can still be cached
    proxy_ignore_headers Set-Cookie;

    location /img {
        # public images: the cache key does not include the user's cookie
        proxy_cache_key "$host$request_uri";
        proxy_pass http://upstream;
    }

    location / {
        # per-user content: the cookie is part of the cache key
        proxy_cache_key "$host$request_uri $cookie_user";
        proxy_pass http://upstream;
    }
}
Storing
Related to the concept of a cache is a store. If you are serving large, static files that
will never change, so that there is no reason to expire the entries, NGINX offers
something called a store to help serve these files faster. NGINX will keep a local copy
of any file that you configure it to fetch. These files remain on disk, and the
upstream server will not be asked for them again. If any of these files change
upstream, they need to be deleted by some external process, or NGINX will continue
serving the stale copies, so for smaller, static files, using the cache is more appropriate.