# Express API Rate Limiting

## Introduction
Rate limiting is a critical strategy in API design that restricts how many requests a user can make to your API within a specified timeframe. Properly implemented rate limiting:
- Protects your API from abuse and DDoS attacks
- Ensures fair usage across all clients
- Prevents server overload
- Reduces costs for APIs deployed on paid infrastructure
- Improves overall availability and reliability
In this tutorial, we'll explore how to implement rate limiting in Express applications, understand different rate limiting strategies, and see how to customize the implementation to suit your specific application needs.
## What is Rate Limiting?
Rate limiting controls the amount of incoming and outgoing traffic to or from a network. In the context of REST APIs, it means setting a threshold on how many requests a client can make within a specific timeframe.
For example, you might want to limit users to:
- 100 requests per hour
- 1000 requests per day
- 5 requests per second
If a user exceeds these limits, the API server returns an HTTP 429 (Too Many Requests) response, indicating that they've hit their rate limit.
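To make the mechanism concrete, here is a minimal fixed-window counter in plain JavaScript. This is a sketch of the core idea behind most rate-limiting middleware, not a production implementation (it keeps everything in memory and never evicts old keys):

```javascript
// Minimal fixed-window rate limiter: one counter per client key,
// reset when the window elapses.
function createFixedWindowLimiter({ windowMs, max }) {
  const hits = new Map(); // key -> { count, windowStart }

  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      hits.set(key, { count: 1, windowStart: now });
      return true; // first request of a fresh window
    }
    entry.count += 1;
    return entry.count <= max; // false -> caller should respond with 429
  };
}

// Example: 5 requests per second per client
const allowRequest = createFixedWindowLimiter({ windowMs: 1000, max: 5 });
```

When `allowRequest` returns `false`, the server would answer with HTTP 429 instead of processing the request.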
## Implementing Rate Limiting in Express

Let's implement rate limiting in an Express application using the popular `express-rate-limit` package.
### Step 1: Install the Required Package

```shell
npm install express-rate-limit
```
### Step 2: Basic Rate Limiter Setup
Here's a simple rate limiter that limits each IP to 100 requests per 15-minute window:
```javascript
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Basic rate limiter configuration
const limiter = rateLimit({
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100, // limit each IP to 100 requests per windowMs
  standardHeaders: true, // Return rate limit info in the `RateLimit-*` headers
  legacyHeaders: false, // Disable the `X-RateLimit-*` headers
  message: 'Too many requests from this IP, please try again after 15 minutes'
});

// Apply the rate limiter to all requests
app.use(limiter);

app.get('/', (req, res) => {
  res.send('Hello World!');
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});
```
When you run this code and make requests:
- For the first 100 requests within 15 minutes, you'll get normal responses
- After 100 requests, you'll get a "Too many requests" message with HTTP 429 status code
### Step 3: Route-Specific Rate Limiting

You might want different rate limits for different routes. For example, a sensitive endpoint like login usually warrants much stricter limits than general API routes:
```javascript
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Create different limiters
const generalLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  message: 'Too many general requests from this IP'
});

const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 50,
  message: 'Too many API requests from this IP'
});

const authLimiter = rateLimit({
  windowMs: 60 * 60 * 1000, // 1 hour
  max: 5,
  message: 'Too many login attempts, please try again after an hour'
});

// Apply the rate limiters to different routes
app.use('/', generalLimiter); // Apply to all routes
app.use('/api/', apiLimiter); // Apply to API routes
app.use('/api/login', authLimiter); // Apply stricter limits to login endpoint

app.get('/', (req, res) => {
  res.send('Public route');
});

app.get('/api/data', (req, res) => {
  res.send('API data route');
});

app.post('/api/login', (req, res) => {
  res.send('Login successful');
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});
```
In this example, we have:
- A general limit of 100 requests per 15 minutes for all routes
- A stricter limit of 50 requests per 15 minutes for API routes
- A very strict limit of 5 requests per hour for the login endpoint to prevent brute-force attacks

Note that these limiters stack: a request to `/api/login` passes through all three middlewares, so it counts against each limiter's quota.
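For login endpoints specifically, `express-rate-limit` also offers a `skipSuccessfulRequests` option so that only failed attempts count toward the limit, which keeps legitimate users from burning their quota on successful logins. A sketch of such a configuration (the option values here are illustrative):

```javascript
// Stricter login limiter that only counts failed attempts.
// With skipSuccessfulRequests, requests that end in a successful
// (non-error) response are not counted toward the limit.
const authLimiterOptions = {
  windowMs: 60 * 60 * 1000, // 1 hour
  max: 5, // 5 failed attempts per hour
  skipSuccessfulRequests: true,
  message: 'Too many failed login attempts, please try again after an hour'
};
```

You would pass this object to `rateLimit(authLimiterOptions)` in place of the `authLimiter` configuration above.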
## Advanced Rate Limiting Techniques

### Custom Storage

By default, `express-rate-limit` stores rate limiting information in memory, which isn't suitable for production applications running multiple server instances. Let's use Redis for distributed storage:
```javascript
const express = require('express');
const rateLimit = require('express-rate-limit');
const RedisStore = require('rate-limit-redis');
const Redis = require('ioredis');

const app = express();

// Create Redis client
const redisClient = new Redis({
  host: 'localhost',
  port: 6379,
  // password: 'your-redis-password', // uncomment if your Redis requires auth
});

// Configure rate limiter with Redis storage
const limiter = rateLimit({
  store: new RedisStore({
    // @ts-expect-error - Known issue: the `call` function is required
    sendCommand: (...args) => redisClient.call(...args),
  }),
  windowMs: 15 * 60 * 1000, // 15 minutes
  max: 100,
  standardHeaders: true,
  legacyHeaders: false,
});

// Apply the rate limiter
app.use(limiter);

app.get('/', (req, res) => {
  res.send('Hello World with Redis-backed rate limiting!');
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});
```
Before running this code, install the required packages:

```shell
npm install express-rate-limit rate-limit-redis ioredis
```
Using Redis ensures that rate limits are shared across all instances of your application, which is crucial in scaled environments.
### Custom Keys

By default, the rate limiter uses IP addresses to identify clients. In some cases, you might want a different identifier:
```javascript
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

const apiLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  max: 100,
  // Use API key from request headers instead of IP
  keyGenerator: function (req) {
    return req.headers['x-api-key'] || req.ip;
  },
  message: 'Too many requests from this API key, please try again after 15 minutes'
});

app.use('/api/', apiLimiter);

app.get('/api/data', (req, res) => {
  res.send('API data');
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});
```
This approach is useful for APIs that use API keys for authentication.
### Dynamic Rate Limiting
Sometimes, you might want different rate limits for different types of users:
```javascript
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

const dynamicLimiter = rateLimit({
  windowMs: 15 * 60 * 1000,
  // Dynamic max based on user type
  max: (req) => {
    if (req.user && req.user.isPremium) {
      return 1000; // Premium users get more requests
    }
    return 100; // Regular users get fewer requests
  },
  message: 'Rate limit exceeded'
});

// Dummy authentication middleware
app.use((req, res, next) => {
  // Simulate user authentication
  const apiKey = req.headers['x-api-key'];
  if (apiKey === 'premium-key') {
    req.user = { isPremium: true };
  } else {
    req.user = { isPremium: false };
  }
  next();
});

app.use('/api/', dynamicLimiter);

app.get('/api/data', (req, res) => {
  res.send(`Hello ${req.user.isPremium ? 'premium' : 'regular'} user!`);
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});
```
This example demonstrates how to provide tiered rate limiting based on user status.
## Best Practices for Rate Limiting
- Communicate Limits Clearly: Use headers to indicate limits, remaining requests, and reset times
- Be Generous: Start with liberal limits and tighten as needed
- Use Sliding Windows: Reset counters gradually rather than all at once
- Store Rate Limit Data Externally: For multi-server setups, use Redis or a similar solution
- Consider User Experience: Provide clear error messages when limits are hit
- Monitor and Adjust: Regularly review rate limit effectiveness
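The sliding-window point above deserves a sketch: instead of one counter that resets abruptly at the window boundary, keep the timestamps of recent requests and drop those older than the window, so the limit "slides" with time. A minimal sliding-window log limiter in plain JavaScript (illustrative only; real implementations use more memory-efficient approximations):

```javascript
// Sliding-window log limiter: remembers when each request happened,
// so old requests age out gradually instead of all at once.
function createSlidingWindowLimiter({ windowMs, max }) {
  const logs = new Map(); // key -> array of request timestamps

  return function allow(key, now = Date.now()) {
    // Keep only the timestamps still inside the window
    const log = (logs.get(key) || []).filter((t) => now - t < windowMs);
    if (log.length >= max) {
      logs.set(key, log);
      return false; // over the limit -> respond with 429
    }
    log.push(now);
    logs.set(key, log);
    return true;
  };
}
```

With a fixed window, a client can burst `max` requests at the end of one window and `max` more at the start of the next; the sliding window prevents that double burst.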
## Adding Rate Limit Headers
To make your API more user-friendly, include rate limit information in response headers:
```javascript
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Rate limiter with informative headers
const limiter = rateLimit({
  windowMs: 60 * 1000, // 1 minute
  max: 10,
  standardHeaders: true, // Return rate limit info in the `RateLimit-*` headers
  legacyHeaders: false, // Disable the `X-RateLimit-*` headers
});

app.use(limiter);

app.get('/', (req, res) => {
  res.send('Check your headers for rate limit information!');
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});
```
With this setup, each response will include headers like:
- `RateLimit-Limit`: the maximum number of allowed requests
- `RateLimit-Remaining`: the number of requests remaining in the current window
- `RateLimit-Reset`: the time until the limit resets
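On the client side, these headers can drive polite retry behavior. A small sketch, assuming the standard `RateLimit-*` headers above and treating `RateLimit-Reset` as seconds until the window resets (the format express-rate-limit uses for its standard headers); note that Node lowercases header names:

```javascript
// Decide how long a client should wait before retrying, based on
// the RateLimit-* headers of the last response. Returns 0 while
// requests remain in the current window.
function retryDelayMs(headers) {
  const remaining = Number(headers['ratelimit-remaining']);
  const resetSeconds = Number(headers['ratelimit-reset']);
  if (Number.isNaN(remaining) || remaining > 0) return 0;
  // Out of quota: wait until the window resets (1s fallback)
  return Number.isNaN(resetSeconds) ? 1000 : resetSeconds * 1000;
}
```

A client could call this after each response and sleep for the returned duration before its next request.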
## Real-World Example: API with Multiple Tiers
Let's create a more comprehensive example of an API with different rate limiting tiers:
```javascript
const express = require('express');
const rateLimit = require('express-rate-limit');

const app = express();

// Parse JSON bodies
app.use(express.json());

// Mock user database with different plan levels
const users = {
  'api-key-free': { plan: 'free', requestLimit: 100 },
  'api-key-basic': { plan: 'basic', requestLimit: 500 },
  'api-key-premium': { plan: 'premium', requestLimit: 1000 }
};

// Authentication middleware
const authenticate = (req, res, next) => {
  const apiKey = req.headers['x-api-key'];
  if (!apiKey || !users[apiKey]) {
    return res.status(401).json({ error: 'Invalid API key' });
  }
  req.user = users[apiKey];
  next();
};

// Tiered rate limiting middleware
const tieredRateLimit = rateLimit({
  windowMs: 60 * 60 * 1000, // 1 hour
  max: (req) => {
    return req.user ? req.user.requestLimit : 30; // Default limit for unauthenticated users
  },
  keyGenerator: (req) => {
    return req.headers['x-api-key'] || req.ip;
  },
  standardHeaders: true,
  message: (req) => {
    const plan = req.user ? req.user.plan : 'unauthenticated';
    return `Rate limit exceeded for ${plan} tier. Please upgrade your plan or try again later.`;
  }
});

// Public endpoint (no authentication or rate limiting in this example)
app.get('/public', (req, res) => {
  res.json({ message: 'This is a public endpoint' });
});

// Protected endpoints with tiered rate limiting
app.use('/api', authenticate, tieredRateLimit);

app.get('/api/data', (req, res) => {
  res.json({
    message: `Welcome to the API, ${req.user.plan} tier user!`,
    data: { items: ['item1', 'item2', 'item3'] }
  });
});

app.get('/api/account', (req, res) => {
  res.json({
    plan: req.user.plan,
    requestLimit: req.user.requestLimit,
  });
});

// Error handling middleware
app.use((err, req, res, next) => {
  console.error(err.stack);
  res.status(500).json({ error: 'Something went wrong!' });
});

app.listen(3000, () => {
  console.log('Server is running on port 3000');
});
```
Testing this API:
- Make a request to `/public`, which is accessible to anyone without an API key
- Make a request to `/api/data` with a free-tier API key:
  ```shell
  curl -H "X-API-Key: api-key-free" http://localhost:3000/api/data
  ```
- Make repeated requests with the same key to hit the rate limit:
  ```shell
  for i in {1..101}; do curl -H "X-API-Key: api-key-free" http://localhost:3000/api/data; done
  ```
- Try a premium key to verify the higher limit:
  ```shell
  curl -H "X-API-Key: api-key-premium" http://localhost:3000/api/data
  ```
## Summary
Rate limiting is an essential component of any production-ready API. It helps protect your infrastructure from abuse, ensures fair usage among clients, and enhances the overall stability of your application.
In this tutorial, we covered:
- Basic rate limiting configuration with `express-rate-limit`
- Route-specific rate limiting for different endpoints
- Advanced techniques including Redis storage for distributed environments
- Custom key generation for API key-based limits
- Dynamic rate limiting based on user tiers
- Best practices for implementing rate limits
- A real-world example of a tiered API with comprehensive rate limiting
By implementing proper rate limiting, you ensure that your API can handle traffic efficiently while protecting against malicious users and abuse.
## Additional Resources
- express-rate-limit documentation
- Rate limiting patterns
- HTTP 429 Status Code (RFC 6585)
- API Design Best Practices
## Exercises
- Implement a rate limiter that allows different limits for GET vs POST requests
- Create a system that gradually increases rate limits for trusted users over time
- Build a simple dashboard that shows current rate limit status for different API keys
- Implement a "burst" feature that allows occasional spikes in usage
- Add a system that notifies administrators when users regularly hit their rate limits