Nginx Client Timeouts
Introduction
When running a web server, managing timeouts properly is crucial for maintaining performance, security, and reliability. Nginx, a popular web server and reverse proxy, offers several timeout settings that control how long connections can remain open before being closed. These settings help prevent resource exhaustion, protect against slow-loris attacks, and ensure your server remains responsive under various traffic conditions.
In this guide, we'll explore Nginx client timeout directives, understand their purpose, learn how to configure them optimally, and see real-world examples of timeout configurations for different use cases.
Understanding Nginx Client Timeouts
Nginx uses several different timeout directives to control various aspects of client connections. Let's break down the most important ones:
Key Timeout Directives
- client_body_timeout: Defines how long Nginx will wait between two successive read operations while receiving the request body (default: 60s).
- client_header_timeout: Specifies how long Nginx will wait for the client to send the complete request headers (default: 60s).
- keepalive_timeout: Controls how long an idle keep-alive connection stays open (default: 75s).
- send_timeout: Sets the timeout between two successive write operations while transmitting a response to the client (default: 60s).
These timeouts work together to control the entire lifecycle of a client connection, from initial headers to complete response delivery.
Configuring Client Timeouts in Nginx
Let's look at how to configure these timeout settings in your Nginx configuration. Timeouts are typically defined in seconds, but you can also use time units like ms (milliseconds), s (seconds), m (minutes), and h (hours).
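For example, here is a quick sketch of the unit syntax (the values are arbitrary and only illustrate how units are written):
http {
    client_header_timeout 500ms;   # half a second
    client_body_timeout 1m;        # one minute
    keepalive_timeout 75;          # no unit means seconds
}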
Basic Timeout Configuration
Here's a basic configuration example with default-like values:
http {
    # Wait up to 60 seconds for client to send request headers
    client_header_timeout 60s;

    # Wait up to 60 seconds between reads of the request body
    client_body_timeout 60s;

    # Keep idle keep-alive connections open for 75 seconds
    keepalive_timeout 75s;

    # Wait up to 60 seconds between writes when sending data to the client
    send_timeout 60s;

    # ... other configuration ...
}
Where to Set Timeout Directives
Timeout directives can be set at different levels:
- http: Applies to all servers
- server: Applies to a specific server (virtual host)
- location: Applies to specific URL paths
Example of different levels:
http {
    # Global default
    client_header_timeout 60s;

    server {
        listen 80;
        server_name example.com;

        # Override for this server
        client_header_timeout 30s;

        location /api/ {
            # Even more specific for API requests
            client_header_timeout 120s;
        }
    }
}
Real-World Timeout Optimizations
Let's explore some practical scenarios and how to adjust timeouts accordingly.
Scenario 1: High-Traffic Web Server
For high-traffic websites serving mostly static content, shorter timeouts are often beneficial:
http {
    # Shorter timeouts to handle more connections
    client_header_timeout 10s;
    client_body_timeout 10s;
    keepalive_timeout 30s;
    send_timeout 10s;

    # Limit the number of requests served over each keepalive connection
    keepalive_requests 100;
}
Scenario 2: API Server with Large Uploads
For servers handling large file uploads or API requests that take longer to process:
http {
    # More generous timeouts for uploading large files
    client_body_timeout 300s;
    client_header_timeout 60s;
    send_timeout 300s;

    # Increased buffer size and allowed size for the request body
    client_body_buffer_size 10m;
    client_max_body_size 100m;
}
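If only one endpoint accepts large uploads, you may prefer to scope the generous limits to that path instead of setting them globally. A minimal sketch, assuming a hypothetical /upload/ location:
server {
    listen 80;
    server_name example.com;

    location /upload/ {
        # Generous limits only where large request bodies are expected
        client_body_timeout 300s;
        client_max_body_size 100m;
        client_body_buffer_size 2m;
    }
}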
Scenario 3: Reverse Proxy for Slow Backend Applications
When Nginx acts as a reverse proxy for applications that might take longer to respond:
http {
    # Standard client timeouts
    client_header_timeout 60s;
    client_body_timeout 60s;

    # Longer proxy timeouts for slow backends
    proxy_connect_timeout 60s;
    proxy_send_timeout 120s;
    proxy_read_timeout 300s;
}
Advanced Timeout Considerations
Handling WebSocket Connections
WebSockets require longer timeouts since they maintain persistent connections:
location /websocket/ {
    proxy_pass http://websocket_backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Much longer timeouts for WebSocket connections
    proxy_read_timeout 3600s;
    proxy_send_timeout 3600s;
}
Detecting and Preventing Timeout Issues
Here's how you can identify timeout-related problems in your Nginx logs:
- Look for status codes like 504 Gateway Timeout or 408 Request Timeout
- Check for entries containing "client timed out" or "upstream timed out"
Example log entry showing a timeout:
2023/04/15 12:34:56 [warn] 12345#0: *123 client timed out (110: Connection timed out) while reading client request headers, client: 192.168.1.2, server: example.com
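Depending on the message, these entries may be logged below the default error_log level (client-side timeouts are often logged at the info level), so a quieter log setting can hide them. One way to make sure they show up, sketched here with an assumed log path, is to raise the log verbosity while investigating:
http {
    # Log at "info" so client timeout messages are not filtered out
    # (the path is an assumption; adjust it to your installation)
    error_log /var/log/nginx/error.log info;
}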
Testing Your Timeout Configuration
You can use tools like curl to test your timeout settings:
# Throttle the response download to 50 bytes/second; on large responses
# this can trigger send_timeout on the server side
curl -v --limit-rate 50 http://your-server.com/

# Send a request body slowly to exercise client_body_timeout
# (large-file.bin is a placeholder for any file big enough that the rate limit matters)
curl -v --limit-rate 50 -X POST --data-binary @large-file.bin http://your-server.com/
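Timeouts are easier to observe against a server block with deliberately short values. A minimal sketch for a throwaway test vhost (the port and values are arbitrary, for testing only):
server {
    listen 8080;
    server_name _;

    # Aggressively short values so timeouts trigger quickly during testing
    client_header_timeout 5s;
    client_body_timeout 5s;
    send_timeout 5s;

    location / {
        return 200 "ok";
    }
}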
Impact of Timeouts on Performance
Adjusting timeouts can significantly impact your server's performance:
| Timeout Setting | Too Short | Too Long | Recommendation |
|---|---|---|---|
| client_header_timeout | Legitimate slow clients get disconnected | Server resources tied up by slow/malicious clients | 10-60s depending on client types |
| client_body_timeout | File uploads may fail | Server vulnerable to slow POST attacks | 10-120s based on expected request sizes |
| keepalive_timeout | Higher connection overhead, more TCP handshakes | Too many idle connections consume resources | 30-65s for most web applications |
| send_timeout | Large downloads may fail | Zombie connections tie up worker processes | 30-60s for typical content |
Common Issues and Solutions
Problem: Too Many 504 Gateway Timeout Errors
Solution: Increase proxy_read_timeout and proxy_send_timeout:
location /api/ {
    proxy_pass http://backend;
    proxy_read_timeout 300s;   # Increased from the 60s default
    proxy_send_timeout 300s;   # Increased from the 60s default
}
Problem: Server Running Out of Worker Connections
Solution: Reduce keepalive timeout and limit keepalive requests:
http {
    keepalive_timeout 30s;            # Reduced from the 75s default
    keepalive_requests 100;           # Limit requests per keepalive connection
    reset_timedout_connection on;     # Free resources immediately on timeout
}
Problem: Slow File Uploads Failing
Solution: Increase body timeout and adjust related settings:
http {
    client_body_timeout 300s;
    client_max_body_size 100m;
    client_body_buffer_size 2m;
}
Summary
Properly configured client timeouts in Nginx are essential for:
- Performance optimization: Balancing resource usage and client needs
- Security: Protecting against slow HTTP attacks
- Reliability: Ensuring connections are properly managed
The ideal timeout values depend on your specific use case, traffic patterns, and application requirements. Start with sensible defaults, monitor your server's behavior, and adjust accordingly.
Remember that timeout optimization is an ongoing process that should adapt to changing traffic patterns and application requirements.
Additional Resources
- Nginx Official Documentation on Core Module
- Tuning Nginx for Performance
- Detecting and Preventing Slow HTTP Attacks
Practice Exercises
- Configure an Nginx server with different timeout settings and use curl with rate limiting to test when timeouts occur.
- Analyze your current Nginx logs to identify any timeout-related issues.
- Create specialized timeout configurations for different parts of your website based on expected usage patterns.