Nginx Reverse Proxy Configuration: Complete Guide
Misconfigured reverse proxies are a leading cause of web outages, security issues, and debugging headaches. Yet, when done right, Nginx reverse proxy setups are the backbone of scalable, secure, and fast architectures. This guide cuts through the noise with complete, working examples, production-proven best practices, and honest trade-offs. Copy, paste, and deploy—then dig into the real-world reasoning behind each configuration.
Key Takeaways:
- Nginx reverse proxy setups centralize security, SSL, and routing—making your stack simpler and safer.
- Headers, timeouts, and protocol forwarding often break apps in production—copy the tested settings below.
- Nginx offers multiple load balancing and caching strategies with clear trade-offs for scale, stickiness, and fault-tolerance.
- Most outages are caused by health check gaps, improper SSL handling, or missing X-Forwarded headers.
- Automate certificate renewal, monitor logs, and avoid stale cache bugs to keep your stack resilient.
Why Use Nginx as a Reverse Proxy?
Nginx has dominated the reverse proxy space for years thanks to its event-driven architecture and lightweight footprint. Here’s what makes it the default choice for most production environments:
- SSL/TLS Termination: Offload expensive crypto operations from your backend apps.
- Centralized Authentication & Security: Apply WAF rules, IP filtering, and rate limiting before traffic hits your app.
- Load Balancing: Distribute requests across a pool of backend servers with a choice of algorithms.
- Caching: Reduce backend load with flexible, granular HTTP caching.
- Header and Protocol Manipulation: Rewrite and forward headers to keep apps and logs accurate.
Alternatives like HAProxy, Caddy, and Traefik exist, but Nginx's balance of performance, configurability, and ecosystem support makes it the first stop for most teams. For an in-depth comparison between Nginx and other reverse proxies, check out the SesameDisk Nginx Reverse Proxy Guide (2023).
Basic Reverse Proxy Setup
The most common Nginx reverse proxy pattern is to sit in front of a backend (Node.js, Python, PHP, Go, etc.), forwarding HTTP requests and passing along crucial client information.
```nginx
# Nginx 1.22+
server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

# Test with: curl -H 'Host: api.example.com' http://nginx-server/
# Expected: Response from backend at 127.0.0.1:3000
```
Explanation:
`proxy_pass` tells Nginx where to send the request. The `proxy_set_header` lines ensure the backend knows the original client IP, protocol, and host, which is vital for logging, redirects, and app logic.
Why it matters: Many apps use req.headers['host'] or req.headers['x-forwarded-for'] to determine canonical URLs or user IPs. If these aren’t forwarded, you’ll see bugs with redirects, authentication flows, and user tracking.
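The mirror-image problem appears on the Nginx side when Nginx itself sits behind another proxy or load balancer: `$remote_addr` then holds the intermediate proxy's IP, not the client's. A minimal sketch using the standard `ngx_http_realip_module` (the `10.0.0.0/8` trusted range is a placeholder; substitute your own load balancer's CIDR):

```nginx
# Trust X-Forwarded-For only when it arrives from a known proxy range
# (10.0.0.0/8 is illustrative; use your load balancer's addresses).
set_real_ip_from 10.0.0.0/8;
real_ip_header X-Forwarded-For;
real_ip_recursive on;   # walk past multiple trusted hops in the chain
```

With this in place, `$remote_addr` (and therefore the `X-Real-IP` header above) reflects the original client rather than the last hop.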
Edge Case: Path Handling
Consider this backend route:
```nginx
location /api/ {
    proxy_pass http://localhost:4000/;
}
# /api/users → http://localhost:4000/users
```
If you drop the trailing slash (i.e., give `proxy_pass` no URI part), Nginx forwards the original request URI unchanged, so the backend receives `/api/users` instead of `/users`, leading to 404s if your backend expects the prefix to be stripped. Double-check Nginx's `proxy_pass` documentation for the exact substitution rules.
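For contrast, here is the same location without a URI part in `proxy_pass` (a sketch reusing the hypothetical port 4000 backend from above):

```nginx
location /api/ {
    # No URI part after the port: the original path is forwarded as-is.
    proxy_pass http://localhost:4000;
}
# /api/users → http://localhost:4000/api/users
```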
Real-World SSL Termination
SSL/TLS termination is where Nginx shines—and where mistakes are common. Here’s a robust, production-ready setup:
```nginx
server {
    listen 443 ssl http2;
    server_name app.example.com;

    ssl_certificate /etc/letsencrypt/live/app.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/app.example.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;

    # Strong security headers
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains" always;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

server {
    listen 80;
    server_name app.example.com;
    return 301 https://$host$request_uri;
}
```
- Automate SSL renewal using Certbot to avoid downtime.
- Always set `X-Forwarded-Proto` to `https` so your backend can generate correct URLs and cookies.
- Force HTTP to HTTPS with a 301 redirect to prevent insecure traffic leaks.
- Enable HTTP/2 for faster SSL connections—modern browsers expect it.
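Two optional additions that commonly accompany a setup like this are TLS session resumption and OCSP stapling; a sketch (the cache size, timeout, and resolver values are illustrative, not tuned recommendations):

```nginx
# Reuse TLS sessions to cut handshake cost for returning clients
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 1h;

# Staple OCSP responses so clients skip a separate revocation lookup
ssl_stapling on;
ssl_stapling_verify on;
resolver 1.1.1.1 8.8.8.8 valid=300s;
```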
For a real-world deployment, refer to the SesameDisk Nginx Reverse Proxy, Load Balancing & TLS Guide.
Load Balancing Strategies and Health Checks
Nginx supports several load balancing algorithms. Here’s a practical comparison:
| Method | How It Works | Best For | Trade-offs |
|---|---|---|---|
| Round Robin (default) | Distributes requests evenly | General purpose | No session stickiness |
| Least Connections | Sends new requests to the backend with the fewest active connections | Long-running or variable workloads | Not always “fair” if requests differ in duration |
| IP Hash | Routes requests from the same client IP to the same backend | Session persistence for stateful apps | Imbalanced if clients are unevenly distributed |
Example: Load Balancing with Passive Health Checks
```nginx
upstream api_upstream {
    least_conn;
    server 10.0.0.11:9000 max_fails=2 fail_timeout=10s;
    server 10.0.0.12:9000 max_fails=2 fail_timeout=10s;
    server 10.0.0.13:9000 max_fails=2 fail_timeout=10s;
}

server {
    listen 80;
    server_name api.example.com;

    location / {
        proxy_pass http://api_upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
`least_conn;` enables least-connections load balancing. `max_fails` and `fail_timeout` mark a backend as "down" if it fails too many times in a short period; this is passive health checking.
Limitations: Nginx OSS only supports passive health checks. For active HTTP health checks (probing /health endpoints) you’ll need Nginx Plus or a sidecar process.
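Within those limits, you can still soften failures with the `backup` and `down` server parameters; a sketch extending the upstream above (the `10.0.0.14` address is hypothetical):

```nginx
upstream api_upstream {
    least_conn;
    server 10.0.0.11:9000 max_fails=2 fail_timeout=10s;
    server 10.0.0.12:9000 max_fails=2 fail_timeout=10s;
    server 10.0.0.13:9000 down;     # taken out of rotation, e.g. for maintenance
    server 10.0.0.14:9000 backup;   # receives traffic only when all primaries are down
}
```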
Alternative: Using IP Hash for Session Stickiness
```nginx
upstream frontend {
    ip_hash;
    server web1.internal:8080;
    server web2.internal:8080;
}
# This ensures the same client IP is routed to the same backend server
```
Use this for web apps that store session state in memory, but note that stickiness can break if you add or remove servers frequently.
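If you need stickiness that survives pool changes better, Nginx OSS also offers the generic `hash` directive with consistent (ketama) hashing, which remaps only a fraction of clients when a server is added or removed; a sketch keyed on the client IP:

```nginx
upstream frontend {
    # Consistent hashing: adding or removing a server remaps only ~1/N of keys
    hash $remote_addr consistent;
    server web1.internal:8080;
    server web2.internal:8080;
}
```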
Caching, Tuning, and Performance
Proxy caching with Nginx can dramatically improve throughput and resilience, but it introduces edge cases in invalidation and consistency. Here’s a robust cache config for an API:
```nginx
# Note: proxy_cache_path belongs at the http {} level, outside any server block.
proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api_cache:50m
                 max_size=5g inactive=10m use_temp_path=off;

server {
    listen 80;
    server_name api-cache.example.com;

    location /v1/ {
        proxy_cache api_cache;
        proxy_cache_valid 200 10m;
        proxy_cache_valid 404 1m;
        proxy_cache_bypass $http_cache_control;
        proxy_no_cache $http_cache_control;
        proxy_cache_use_stale error timeout updating;
        proxy_pass http://127.0.0.1:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```
- `proxy_cache_path`: Defines cache storage and limits.
- `proxy_cache`: Activates caching for this location.
- `proxy_cache_valid`: Sets TTL for different status codes.
- `proxy_cache_bypass` and `proxy_no_cache`: Respect client `Cache-Control` headers for cache bypass.
- `proxy_cache_use_stale`: Serves stale data on backend error, which is crucial for high-availability APIs.
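To verify the cache is actually doing its job, it helps to expose the cache status during debugging; a sketch using the built-in `$upstream_cache_status` variable (the header name `X-Cache-Status` is just a convention, not an Nginx builtin):

```nginx
location /v1/ {
    proxy_cache api_cache;
    # Possible values: MISS, HIT, BYPASS, EXPIRED, STALE, UPDATING, REVALIDATED
    add_header X-Cache-Status $upstream_cache_status always;
    proxy_pass http://127.0.0.1:5000;
}
```

Requesting a cacheable endpoint twice with `curl -sI` should then show `MISS` on the first response and `HIT` on the second.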
Performance tuning:
- Increase `proxy_buffer_size` and `proxy_buffers` for large responses.
- Enable `gzip` compression for static assets and API payloads:

```nginx
gzip on;
gzip_types application/json text/css application/javascript;
gzip_min_length 1024;
```
Real-World Data: In production, Nginx caching can reduce backend API load by 60–80% for cacheable endpoints (see SesameDisk Guide, 2023).
Common Pitfalls and Pro Tips
- Headers not forwarded: Forgetting `proxy_set_header Host $host;` or the `X-Forwarded-*` headers breaks authentication, logging, and geo-IP in most frameworks.
- Timeout mismatches: The defaults (`proxy_read_timeout` 60s) may be too short for slow backends. Set timeouts explicitly:

```nginx
proxy_connect_timeout 5s;
proxy_send_timeout 30s;
proxy_read_timeout 60s;
```

- SSL renewal lapses: Automate Certbot or acme.sh renewals. Monitor expiry with a script or external service.
- Cache staleness: Without proper invalidation or TTL, you’ll serve outdated data. Use short TTLs for dynamic content and expose cache-busting headers to clients.
- Passive health checks miss slow failures: Nginx marks a server “down” only after repeated failed connections. For real-time health, use Nginx Plus or integrate a watcher that rewrites the upstream config.
- Logging: Enable access and error logs at a granular level. Use `log_format` to include `$upstream_addr` and `$upstream_status` for debugging:

```nginx
log_format upstreamlog '$remote_addr - $host [$time_local] '
                       '"$request" $status $body_bytes_sent '
                       '"$http_referer" "$http_user_agent" '
                       '$upstream_addr $upstream_status';
access_log /var/log/nginx/access.log upstreamlog;
```
For more production-tested tips, see Nginx Reverse Proxy, Load Balancing & TLS Guide (SesameDisk, 2023).
Conclusion and Next Steps
Nginx remains the gold standard for reverse proxying thanks to its performance, flexibility, and ecosystem. Configuring it well is about more than “just” proxy_pass: headers, timeouts, SSL, and load balancing details make or break real deployments. Use the cut-and-paste configs above as your foundation, then:
- Automate SSL with Certbot and monitor expiry dates.
- Set up log monitoring with tools like ELK Stack or Grafana Loki.
- Test failover by simulating backend outages—watch how Nginx handles fail_timeout and passive health checks.
- Benchmark and tune `proxy_buffers`, cache size, and timeouts for your real workload.
- Read the detailed Nginx Reverse Proxy Guide for hands-on case studies and advanced scenarios.
Master the basics, test under load, and you’ll avoid the most common production disasters. For deeper dives into Nginx with Docker, Kubernetes, or microservice architectures, check back for upcoming guides on SesameDisk.