When using Nginx as a reverse proxy, enabling caching can dramatically improve performance by reducing backend load and speeding up response times. However, there are situations where you may not want to clear the entire cache, but instead only remove a cached copy of a specific page. This guide shows you how to configure Nginx caching, generate cache files, and locate them on disk so you can purge individual items without dropping the full cache.
In this tutorial you will learn:
- How to configure Nginx proxy caching
- How to locate and remove specific cache files

| Category | Requirements, Conventions or Software Version Used |
|---|---|
| System | Ubuntu/Debian-based Linux system |
| Software | Nginx, curl, Python3 |
| Other | A working backend service (simple Python HTTP server in this tutorial) |
| Conventions | # – requires given Linux commands to be executed with root privileges, either directly as the root user or by use of the sudo command; $ – requires given Linux commands to be executed as a regular non-privileged user |
DID YOU KNOW?
Nginx names each cached page after an MD5 hash of the cache key (by default built from the scheme, the proxied host, and the request URI). That's why you can safely delete a single page's cache file without flushing the entire cache.
Configure and Locate Nginx Cache Files
In this tutorial, we will install the required packages, set up Nginx caching, run a simple backend server, and then demonstrate how to generate and locate cache files. The main use case is to allow administrators to remove or refresh a cached page without flushing the entire cache directory.
- Install required packages: Update your system and install nginx, curl, and Python3.
$ sudo apt update
$ sudo apt install nginx curl python3
This ensures your system has the necessary components to set up caching and test with a backend service.
- Set up the Nginx cache configuration: Edit /etc/nginx/nginx.conf and define a cache path inside the http block.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mycache:10m;
Then configure your server block:
server {
    listen 80;

    location / {
        proxy_cache mycache;
        proxy_pass http://127.0.0.1:8080;
        proxy_cache_valid 200 10m;
        add_header X-Cache-Status $upstream_cache_status;
    }
}

Create the cache directory and reload Nginx:
$ sudo mkdir -p /var/cache/nginx
$ sudo chown www-data:www-data /var/cache/nginx
$ sudo nginx -t
$ sudo systemctl reload nginx
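For reference, the two configuration pieces above can be combined into a single sketch. This is a minimal outline rather than a drop-in nginx.conf: the max_size and inactive parameters and the explicit proxy_cache_key line are optional additions (the key shown is simply Nginx's documented default), so adjust them to match your environment.

http {
    # Cache storage: two-level directory layout under /var/cache/nginx,
    # 10 MB of shared memory for keys, at most 1 GB on disk, and entries
    # not requested for 60 minutes are evicted.
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=mycache:10m max_size=1g inactive=60m;

    server {
        listen 80;

        location / {
            proxy_cache mycache;
            # Spelling out the default cache key makes the on-disk filename
            # predictable: the MD5 of "$scheme$proxy_host$request_uri".
            proxy_cache_key "$scheme$proxy_host$request_uri";
            proxy_pass http://127.0.0.1:8080;
            proxy_cache_valid 200 10m;
            add_header X-Cache-Status $upstream_cache_status;
        }
    }
}

With levels=1:2, the last character of the MD5 hash becomes the first-level directory and the next two characters the second-level one, which is why the files later in this tutorial end up in paths such as /var/cache/nginx/2/f7/.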
- Start a backend server: Run a simple Python HTTP server to act as the backend.
$ cd /tmp
$ echo "Hello from backend server" > index.html
$ python3 -m http.server 8080
This provides content that Nginx can cache.
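Before involving Nginx, it is worth confirming that the backend answers on its own. Assuming the Python server from the previous step is still running in another terminal, a direct request should return the test page:

$ curl http://127.0.0.1:8080/
Hello from backend server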
- Generate and verify cache: Request the page through Nginx and confirm caching.
$ curl http://localhost/
$ curl -I http://localhost/ | grep X-Cache-Status
The first request will return X-Cache-Status: MISS, but subsequent requests should return HIT, confirming the response is served from the cache.
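To watch the status change in a single step, you can repeat the header check in a small loop; on a cold cache the first line should read MISS and the remaining ones HIT:

$ for i in 1 2 3; do curl -sI http://localhost/ | grep X-Cache-Status; done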
IMPORTANT: CACHE KEY BEHAVIOR
By default, Nginx builds proxy_cache_key from $scheme$proxy_host$request_uri, where $proxy_host comes from the proxy_pass target. A request to http://localhost/ that is proxied to http://127.0.0.1:8080/ is therefore cached under the key http://127.0.0.1:8080/, which is why removing the file for the 127.0.0.1:8080 hash also affects the responses served at localhost.
- Find and remove cache file for root page: Nginx uses an MD5 hash of the cache key to generate cache filenames.
$ echo -n "http://127.0.0.1:8080/" | md5sum | awk '{print $1}' ee71830d485aca6abfca0ebb34561f72 $ find /var/cache/nginx -type f -name ee71830d485aca6abfca0ebb34561f72 $ rm -f /var/cache/nginx/2/f7/ee71830d485aca6abfca0ebb34561f72 $ curl -I http://localhost/ | grep X-Cache-Status # MISS $ curl -I http://localhost/ | grep X-Cache-Status # HITThe file is recreated after the first MISS, proving that you only removed one page’s cache.
- Find and remove cache file for index.html: The same principle applies if you explicitly request /index.html.
$ curl -I http://localhost/index.html | grep X-Cache-Status   # MISS
$ curl -I http://localhost/index.html | grep X-Cache-Status   # HIT
$ echo -n "http://127.0.0.1:8080/index.html" | md5sum | awk '{print $1}'
678f88e4e619da57ac27568d6c5c7120
$ find /var/cache/nginx -type f -name 678f88e4e619da57ac27568d6c5c7120
/var/cache/nginx/0/12/678f88e4e619da57ac27568d6c5c7120
$ rm -f /var/cache/nginx/0/12/678f88e4e619da57ac27568d6c5c7120
$ curl -I http://localhost/index.html | grep X-Cache-Status   # MISS
$ curl -I http://localhost/index.html | grep X-Cache-Status   # HIT

Here you can see that / and /index.html are cached separately, each with its own hash and file on disk.
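If you purge individual pages regularly, the hash-and-delete steps above can be wrapped in a small script. The following purge_one.sh is a hypothetical helper, not part of Nginx: the script name, the hard-coded cache directory, and the assumption that the tutorial's default cache key format is in use are all choices you may need to adapt.

#!/bin/sh
# purge_one.sh – remove the cached copy of a single page (hypothetical helper).
# Usage: sudo sh purge_one.sh "http://127.0.0.1:8080/index.html"
CACHE_DIR=/var/cache/nginx
KEY="$1"

# Reproduce Nginx's cache filename: the MD5 hash of the cache key.
HASH=$(printf '%s' "$KEY" | md5sum | awk '{print $1}')

# Look the file up under the cache directory and delete it if present.
FILE=$(find "$CACHE_DIR" -type f -name "$HASH")
if [ -n "$FILE" ]; then
    rm -f "$FILE" && echo "Purged: $FILE"
else
    echo "No cache file found for key: $KEY"
fi

Run it with the full cache key rather than the public URL, for example sudo sh purge_one.sh "http://127.0.0.1:8080/" for the root page used above; the next request through Nginx will then show MISS and repopulate the cache.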

Locating the Nginx cache file for a specific URL, in this example http://localhost/index.html.
Conclusion
By following this tutorial, you learned how to enable Nginx proxy caching, generate cache entries, and locate the exact cache file stored on disk. You also saw practical examples for both the root path and /index.html. This is particularly useful when you want to clear or refresh the cache for one specific page while preserving the rest of your cached content.
