TL;DR
- Don’t paste extreme lab values into prod (e.g., `keepalive_requests 100000`, multi-minute timeouts, giant buffers).
- Tune by evidence (benchmarks + metrics) and by workload (static, API, reverse proxy, TLS/HTTP/2/3).
- Account for the layer in front of NGINX (CDN/WAF/LB): set timeouts and real-IP accordingly; configure rate limits using `X-Forwarded-For` if a proxy sits in front.
- OS limits (FDs, `nofile`) and systemd often matter more than dozens of micro-tweaks.
What the Gist gets right (and how to apply it)
- `worker_processes auto;` ✅ perfect on modern NGINX.
- Higher `worker_connections` ✅ yes, but measure: 4k can be fine for API workloads; with TLS/HTTP/2 don’t just crank it up blindly.
- `sendfile on; tcp_nopush on; tcp_nodelay on;` ✅ solid defaults for static serving.
- `open_file_cache` ✅ useful for static sites with many files; benchmark and watch cache hit/miss.
- Rate limiting (`limit_req`, `limit_conn`) ✅ but behind a proxy use the real client IP (see below).
Settings to tone down in production
- `keepalive_requests 100000;` ❌ too high. Use 100–1000, workload-dependent.
- `send_timeout 2;` ❌ aggressive (kills slow clients). Start at 10–15s.
- `client_*_timeout` in minutes ❌ for most apps. Keep 15–60s based on observed behavior.
- Giant header/body buffers ❌ can create a memory-DoS vector; stick to defaults unless you truly need larger headers.
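Taken together, a production-sane starting point for those directives might look like this; the values are starting points to tune against your own latency and error metrics, not gospel:

```nginx
# Sane starting values for the directives above; adjust per workload
keepalive_requests 500;      # not 100000
send_timeout 15s;            # not 2s
client_header_timeout 15s;
client_body_timeout 30s;     # seconds, not minutes
```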
Nuance people rarely mention
- With Cloudflare free (and many ALBs), the backend has ≈100s before a 524—don’t set higher upstream timeouts.
- On-the-fly gzip burns CPU; prefer `gzip_static on;` (precompressed assets) or Brotli/Zstd if available.
- HTTP/2 is almost always a win; HTTP/3 is optional—enable it if your mobile/global profile benefits and your LB supports it.
- BBR (kernel) can improve TTFB/latency for high-RTT clients; test, don’t assume.
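For the edge-timeout point above, a proxy location behind Cloudflare’s ~100s ceiling might cap its upstream waits like this (illustrative values):

```nginx
# Keep upstream waits below the edge's ~100s cutoff (Cloudflare free: 524)
proxy_connect_timeout 5s;
proxy_send_timeout 30s;
proxy_read_timeout 60s;   # comfortably under the edge timeout
```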
Production-safe base template (web/proxy with TLS)
Replace cert paths, domains, and locations. Uncomment HTTP/3 if you’ve built NGINX with QUIC and opened UDP/443.
```nginx
# /etc/nginx/nginx.conf
user nginx;
worker_processes auto;
worker_rlimit_nofile 200000;

error_log /var/log/nginx/error.log warn;
pid /run/nginx.pid;

events {
    worker_connections 4096;
    multi_accept on;  # can help under heavy concurrent accept load
    # SO_REUSEPORT can be enabled per 'listen' in server blocks on newer kernels
}

http {
    include mime.types;
    default_type application/octet-stream;

    # Access log buffered (don't fully disable logs in prod)
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main buffer=32k flush=1s;

    # I/O
    sendfile on;
    tcp_nopush on;   # coalesce headers with the first data chunk
    tcp_nodelay on;

    # Keepalive
    keepalive_timeout 15s;
    keepalive_requests 500;

    # Timeouts
    client_body_timeout 30s;
    client_header_timeout 15s;
    send_timeout 15s;
    reset_timedout_connection on;

    # Open file descriptor caching
    open_file_cache max=100000 inactive=20s;
    open_file_cache_valid 30s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    # Compression (prefer static precompression if possible)
    gzip on;
    gzip_comp_level 2;
    gzip_min_length 1024;
    gzip_vary on;
    gzip_disable "msie6";
    gzip_types
        text/plain text/css text/xml application/xml
        application/json application/javascript application/rss+xml
        image/svg+xml;
    # gzip_static on;  # if you ship .gz assets at build time

    # Security baseline
    server_tokens off;

    # Real IP if behind LB/CDN (uncomment & set your ranges)
    # set_real_ip_from 10.0.0.0/8;  # your LB ranges or CDN ranges
    # real_ip_header X-Forwarded-For;
    # real_ip_recursive on;

    # Rate limiting (uses $binary_remote_addr unless you map the real IP; see below)
    limit_conn_zone $binary_remote_addr zone=conn_ip:10m;
    limit_req_zone $binary_remote_addr zone=req_ip:10m rate=10r/s;

    # TLS (modern)
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers 'TLS_AES_256_GCM_SHA384:TLS_AES_128_GCM_SHA256:TLS_CHACHA20_POLY1305_SHA256:HIGH:!aNULL:!MD5';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:50m;
    ssl_session_timeout 1d;
    ssl_session_tickets off;
    # ssl_stapling on; ssl_stapling_verify on;  # enable with proper chain & DNS resolver

    server {
        listen 80;
        listen 443 ssl http2;  # for HTTP/3, see the quic listen below
        server_name example.com;

        # Certs
        ssl_certificate /etc/ssl/certs/fullchain.pem;
        ssl_certificate_key /etc/ssl/private/privkey.pem;

        # Per-IP limits (tune)
        limit_conn conn_ip 20;
        limit_req zone=req_ip burst=20 nodelay;

        root /var/www/html;
        index index.html;

        location / {
            try_files $uri $uri/ =404;
        }

        # Security headers (tune CSP to your app)
        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header Referrer-Policy no-referrer-when-downgrade;
        add_header Permissions-Policy "geolocation=(), microphone=()";
        # add_header Content-Security-Policy "default-src 'self';" always;

        # Reverse proxy buffer tuning (uncomment for upstreams)
        # proxy_set_header Host $host;
        # proxy_set_header X-Real-IP $remote_addr;
        # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        # proxy_http_version 1.1;
        # proxy_read_timeout 30s;
        # proxy_buffering on;
        # proxy_buffers 16 16k;
        # proxy_busy_buffers_size 32k;

        # HTTP/3 (QUIC) optional (requires a QUIC build + UDP/443 open)
        # listen 443 quic reuseport;
        # add_header Alt-Svc 'h3=":443"; ma=86400' always;
    }
}
```
Reverse proxy/API? Define an `upstream`, set the `proxy_set_header` lines, and tune `proxy_read_timeout` based on upstream expectations and your CDN/LB limits.
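A minimal sketch of that setup, assuming a hypothetical backend on 127.0.0.1:3000 (names and ports are placeholders):

```nginx
upstream app_backend {
    server 127.0.0.1:3000;
    keepalive 32;                         # pool idle connections to the upstream
}

server {
    # ... listen/ssl/server_name as in the template above ...
    location /api/ {
        proxy_pass http://app_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";   # required for upstream keepalive
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 30s;           # stay below your CDN/LB cutoff
    }
}
```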
Correct rate limiting behind a proxy/CDN
If a CDN/LB sits in front, don’t rate-limit by `$binary_remote_addr` (you’ll be limiting the proxy). Use the real client IP:

```nginx
# Trust real IP from these proxies (example: Cloudflare ranges)
set_real_ip_from 103.21.244.0/22;
set_real_ip_from 103.22.200.0/22;
real_ip_header X-Forwarded-For;
real_ip_recursive on;

# Key the zones on the first X-Forwarded-For address (IPv4 only as written)
map $http_x_forwarded_for $client_ip {
    ~^(?P<addr>\d+\.\d+\.\d+\.\d+) $addr;
    default $remote_addr;
}

limit_req_zone $client_ip zone=req_ip:10m rate=10r/s;
limit_conn_zone $client_ip zone=conn_ip:10m;
```
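Applying those zones then works as before, just keyed on `$client_ip`; returning 429 instead of the default 503 makes throttling visible to clients (a sketch):

```nginx
server {
    location / {
        limit_req zone=req_ip burst=20 nodelay;
        limit_conn conn_ip 20;
        limit_req_status 429;    # default is 503
        limit_conn_status 429;
    }
}
```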
File descriptor limits & systemd (Linux)
Raise `nofile` via systemd, not just `worker_rlimit_nofile`:

```shell
sudo mkdir -p /etc/systemd/system/nginx.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/nginx.service.d/limits.conf
[Service]
LimitNOFILE=300000
EOF
sudo systemctl daemon-reload
sudo systemctl restart nginx
```
SELinux: if you need `setrlimit`, run `setsebool -P httpd_setrlimit 1`.
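Note that a systemd service does not inherit your interactive shell’s limits; the shell’s own soft/hard FD limits below are unrelated to what `LimitNOFILE` grants the nginx unit, which is why the drop-in above is needed:

```shell
# FD limits of the current shell only; the nginx unit gets LimitNOFILE instead
ulimit -Sn   # soft limit
ulimit -Hn   # hard limit
```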
HTTP/2, HTTP/3, and TLS
- HTTP/2: enable with `listen 443 ssl http2;`.
- HTTP/3/QUIC: use an NGINX build with QUIC support, open UDP/443, and advertise Alt-Svc.
- TLS: restrict to TLS 1.2/1.3, disable tickets, use modern curves/ciphers. Audit your client base before going ultra-strict.
Brotli / Zstd vs Gzip
- Production: precompress static assets (`gzip_static on;` or Brotli static) during the build to reduce CPU and improve TTFB.
- Dynamic: if you must compress on the fly, keep levels low (1–2 for gzip) and watch CPU. Brotli 3–5 can pay off on HTML/CSS/JS.
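A build-time precompression step can be as simple as the sketch below; it runs against a scratch directory here, so point it at your real docroot instead:

```shell
# Precompress text assets so gzip_static serves .gz files with no request-time CPU
ASSET_DIR=$(mktemp -d)                       # stand-in for your docroot
printf '%s' '<html>hello precompression</html>' > "$ASSET_DIR/index.html"

# -k keeps the original (nginx needs both), -f overwrites stale .gz, -9 best ratio
find "$ASSET_DIR" -type f \( -name '*.html' -o -name '*.css' -o -name '*.js' -o -name '*.svg' \) \
    -exec gzip -kf9 {} \;

ls "$ASSET_DIR"   # index.html and index.html.gz side by side
```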
BBR (kernel) and queuing
If your audience has high RTT (mobile/global), enable BBR and FQ:
```shell
sudo modprobe tcp_bbr
echo 'tcp_bbr' | sudo tee /etc/modules-load.d/bbr.conf
cat <<'EOF' | sudo tee /etc/sysctl.d/99-bbr.conf
net.ipv4.tcp_congestion_control=bbr
net.core.default_qdisc=fq
EOF
sudo sysctl --system
```
Production don’ts
- ❌ `keepalive_requests 100000` and multi-minute timeouts “just because”.
- ❌ Monster buffers “just in case” (easy memory DoS).
- ❌ Limiting by proxy IP (without `real_ip_*`).
- ❌ Permanently disabling access logs: buffer them; you’ll need traceability.
- ❌ Tuning without measurement: use wrk/vegeta, watch p50/p95/p99, active conns, CPU.
Test & deploy
```shell
nginx -t                      # syntax check
sudo systemctl reload nginx   # zero-downtime reload
# If it fails, roll back to the previous config
```
Useful tools: `wrk` / `vegeta` (load), `ss -s` (sockets), `ngxtop`, Prometheus + nginx exporter.
Bottom line
Denji’s Gist is a good lab starting point; production demands observation. Start with sane defaults, measure, and iterate.
Tell me your scenario (big static, API, TLS+HTTP/2/3, behind CDN or not, typical & peak traffic), and I’ll provide tighter initial values and a bespoke template for your use case.
source: GitHub