Echo Rate Limiting

Introduction

Rate limiting is an essential security feature for any web application or API. It restricts the number of requests a user can make to your server within a specific time period. In the context of the Echo framework, implementing rate limiting helps you:

  • Prevent abuse of your API endpoints
  • Protect against brute force attacks
  • Ensure fair resource usage across all users
  • Reduce the risk of DoS (Denial of Service) attacks
  • Maintain system stability under high load

In this tutorial, we'll explore how to implement rate limiting in your Echo applications, different strategies for rate limiting, and best practices for deployment.

Understanding Rate Limiting Concepts

Before diving into implementation, let's understand some key concepts:

What is Rate Limiting?

Rate limiting restricts how many requests a client can make to your API within a defined timeframe. For example, you might limit users to 100 requests per minute.

Rate Limiting Algorithms

Several algorithms can be used for rate limiting:

  1. Fixed Window - Counts requests in fixed time intervals (e.g., per minute)
  2. Sliding Window - More precise tracking that gradually expires old requests
  3. Token Bucket - Users have a "bucket" of tokens that refill at a fixed rate
  4. Leaky Bucket - Requests are processed at a constant rate, regardless of incoming traffic
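To make the token-bucket idea concrete (it's the algorithm used by golang.org/x/time/rate later in this tutorial), here is a minimal, self-contained sketch in plain Go. The `TokenBucket` type and its fields are illustrative, not part of any library:

go
package main

import (
    "fmt"
    "time"
)

// TokenBucket is a minimal token-bucket limiter: tokens refill at a fixed
// rate up to a maximum capacity, and each request spends one token.
type TokenBucket struct {
    capacity   float64   // maximum number of tokens (burst size)
    tokens     float64   // tokens currently available
    refillRate float64   // tokens added per second
    lastRefill time.Time // last time tokens were topped up
}

func NewTokenBucket(capacity, refillRate float64) *TokenBucket {
    return &TokenBucket{
        capacity:   capacity,
        tokens:     capacity,
        refillRate: refillRate,
        lastRefill: time.Now(),
    }
}

// Allow reports whether a request may proceed, consuming one token if so.
func (b *TokenBucket) Allow() bool {
    now := time.Now()
    // Refill tokens proportionally to the elapsed time.
    b.tokens += now.Sub(b.lastRefill).Seconds() * b.refillRate
    if b.tokens > b.capacity {
        b.tokens = b.capacity
    }
    b.lastRefill = now
    if b.tokens >= 1 {
        b.tokens--
        return true
    }
    return false
}

func main() {
    // Bucket of 3 tokens refilling at 1 token/second: the first 3 calls
    // pass immediately, the 4th is rejected.
    bucket := NewTokenBucket(3, 1)
    for i := 1; i <= 4; i++ {
        fmt.Printf("request %d allowed: %v\n", i, bucket.Allow())
    }
}

Note that this single-threaded sketch omits the mutex a real limiter needs; the implementations later in this tutorial handle concurrency.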

Implementing Rate Limiting in Echo

Recent versions of Echo ship a built-in rate limiter middleware (`middleware.RateLimiter` with an in-memory store), but we can also add rate limiting using third-party packages or implement our own — and building it yourself is the best way to understand what any of these middlewares actually do. Let's explore both approaches.

Using the golang.org/x/time/rate Package

Despite its import path, golang.org/x/time/rate is not part of the standard library; it's a supplementary package maintained by the Go team that implements a token-bucket limiter we can integrate with Echo:

go
package main

import (
    "net/http"
    "sync"

    "github.com/labstack/echo/v4"
    "golang.org/x/time/rate"
)

// IPRateLimiter stores rate limiters for different IP addresses
type IPRateLimiter struct {
    ips    map[string]*rate.Limiter
    mu     *sync.RWMutex
    rate   rate.Limit
    bucket int
}

// NewIPRateLimiter creates a new rate limiter for IP addresses
func NewIPRateLimiter(r rate.Limit, b int) *IPRateLimiter {
    return &IPRateLimiter{
        ips:    make(map[string]*rate.Limiter),
        mu:     &sync.RWMutex{},
        rate:   r,
        bucket: b,
    }
}

// AddIP creates a new rate limiter for an IP address if it doesn't exist
func (i *IPRateLimiter) AddIP(ip string) *rate.Limiter {
    i.mu.Lock()
    defer i.mu.Unlock()

    // Re-check under the write lock so two concurrent callers don't
    // overwrite each other's limiter.
    if limiter, exists := i.ips[ip]; exists {
        return limiter
    }
    limiter := rate.NewLimiter(i.rate, i.bucket)
    i.ips[ip] = limiter
    return limiter
}

// GetLimiter returns the rate limiter for the given IP address
func (i *IPRateLimiter) GetLimiter(ip string) *rate.Limiter {
    i.mu.RLock()
    limiter, exists := i.ips[ip]
    i.mu.RUnlock()

    if !exists {
        return i.AddIP(ip)
    }
    return limiter
}

func main() {
    e := echo.New()

    // Create a rate limiter that allows 5 requests per second with a burst of 10
    ipLimiter := NewIPRateLimiter(5, 10)

    // Rate limiting middleware
    e.Use(func(next echo.HandlerFunc) echo.HandlerFunc {
        return func(c echo.Context) error {
            limiter := ipLimiter.GetLimiter(c.RealIP())
            if !limiter.Allow() {
                return c.JSON(http.StatusTooManyRequests, map[string]string{
                    "message": "Rate limit exceeded",
                })
            }
            return next(c)
        }
    })

    e.GET("/", func(c echo.Context) error {
        return c.String(http.StatusOK, "Hello, World!")
    })

    e.Logger.Fatal(e.Start(":8080"))
}

In this example:

  1. We create an IPRateLimiter struct to manage rate limiters for different IP addresses
  2. Each IP gets its own rate limiter, allowing 5 requests per second with a burst of 10
  3. The middleware checks if a request is allowed before proceeding
  4. If the rate limit is exceeded, it returns a 429 Too Many Requests status
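The allow/deny decision and the 429 path can be exercised without running a server at all. The following standalone sketch uses only the standard library's `httptest` package; `newLimitedHandler` and its toy counter-based limit are illustrative stand-ins for the Echo middleware above:

go
package main

import (
    "fmt"
    "net/http"
    "net/http/httptest"
)

// newLimitedHandler wraps a trivial "Hello" handler with a toy limiter:
// after `limit` requests it starts returning 429 Too Many Requests.
func newLimitedHandler(limit int) http.Handler {
    remaining := limit
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if remaining <= 0 {
            w.Header().Set("Content-Type", "application/json")
            w.WriteHeader(http.StatusTooManyRequests)
            fmt.Fprint(w, `{"message":"Rate limit exceeded"}`)
            return
        }
        remaining--
        fmt.Fprint(w, "Hello, World!")
    })
}

func main() {
    handler := newLimitedHandler(2)
    for i := 1; i <= 3; i++ {
        rec := httptest.NewRecorder()
        handler.ServeHTTP(rec, httptest.NewRequest(http.MethodGet, "/", nil))
        fmt.Printf("request %d -> status %d\n", i, rec.Code)
    }
}

Running this prints status 200 for the first two requests and 429 for the third — exactly the behavior you should see when curl-testing the Echo server above past its burst size.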

Using Echo Middleware Pattern

We can also create a more modular and configurable rate limiting middleware:

go
package middleware

import (
    "net/http"
    "sync"
    "time"

    "github.com/labstack/echo/v4"
    "golang.org/x/time/rate"
)

// RateLimiterConfig defines the config for the RateLimiter middleware
type RateLimiterConfig struct {
    // Limit defines the maximum frequency of requests.
    // Default: 10 requests per second
    Limit rate.Limit

    // Burst defines the maximum burst size.
    // Default: 30
    Burst int

    // ExpiresIn defines how long to keep idle limiters in memory.
    // Default: 3 minutes
    ExpiresIn time.Duration

    // LimitReachedHandler is called when the rate limit is exceeded.
    // Default: returns 429 with a "Too Many Requests" message
    LimitReachedHandler func(c echo.Context) error

    // IdentifierExtractor extracts the identifier for limiting from the request.
    // Default: uses the client IP address
    IdentifierExtractor func(c echo.Context) (string, error)
}

type limiterClient struct {
    limiter  *rate.Limiter
    lastSeen time.Time
}

// RateLimiter returns a middleware that limits request frequency
func RateLimiter(config RateLimiterConfig) echo.MiddlewareFunc {
    // Set default config
    if config.Limit == 0 {
        config.Limit = rate.Limit(10)
    }
    if config.Burst == 0 {
        config.Burst = 30
    }
    if config.ExpiresIn == 0 {
        config.ExpiresIn = 3 * time.Minute
    }
    if config.LimitReachedHandler == nil {
        config.LimitReachedHandler = func(c echo.Context) error {
            return c.JSON(http.StatusTooManyRequests, map[string]string{
                "message": "Too Many Requests",
            })
        }
    }
    if config.IdentifierExtractor == nil {
        config.IdentifierExtractor = func(c echo.Context) (string, error) {
            return c.RealIP(), nil
        }
    }

    // Clients map, guarded by a mutex because both the middleware and the
    // cleanup goroutine mutate it
    clients := make(map[string]*limiterClient)
    var mu sync.Mutex

    // Periodically evict limiters that haven't been used recently so the
    // map doesn't grow without bound
    go func() {
        for {
            time.Sleep(time.Minute)
            mu.Lock()
            for id, client := range clients {
                if time.Since(client.lastSeen) > config.ExpiresIn {
                    delete(clients, id)
                }
            }
            mu.Unlock()
        }
    }()

    return func(next echo.HandlerFunc) echo.HandlerFunc {
        return func(c echo.Context) error {
            identifier, err := config.IdentifierExtractor(c)
            if err != nil {
                return err
            }

            // Look up (or create) the client's limiter and refresh lastSeen
            // under the lock to avoid a data race with the cleanup goroutine
            mu.Lock()
            client, exists := clients[identifier]
            if !exists {
                client = &limiterClient{
                    limiter: rate.NewLimiter(config.Limit, config.Burst),
                }
                clients[identifier] = client
            }
            client.lastSeen = time.Now()
            mu.Unlock()

            if !client.limiter.Allow() {
                return config.LimitReachedHandler(c)
            }

            return next(c)
        }
    }
}

Then, you can use it in your main application:

go
package main

import (
    "github.com/labstack/echo/v4"
    "golang.org/x/time/rate"

    "yourproject/middleware"
)

func main() {
    e := echo.New()

    // Add rate limiter middleware with custom configuration
    e.Use(middleware.RateLimiter(middleware.RateLimiterConfig{
        Limit: rate.Limit(5), // 5 requests per second
        Burst: 10,            // Allow bursts of up to 10 requests
    }))

    e.GET("/", func(c echo.Context) error {
        return c.String(200, "Hello, World!")
    })

    e.Logger.Fatal(e.Start(":8080"))
}

Advanced Rate Limiting Strategies

Different Limits for Different Endpoints

You might want to apply different rate limits to different endpoints. For example, login endpoints might need stricter limits than public information endpoints:

go
// Create multiple rate limiters
authLimiter := middleware.RateLimiter(middleware.RateLimiterConfig{
    Limit: rate.Limit(1), // 1 request per second
    Burst: 5,             // Burst of 5
})

apiLimiter := middleware.RateLimiter(middleware.RateLimiterConfig{
    Limit: rate.Limit(10), // 10 requests per second
    Burst: 20,             // Burst of 20
})

// Apply them to different routes
e.POST("/login", loginHandler, authLimiter)
e.POST("/register", registerHandler, authLimiter)

// API group with its own limiter
api := e.Group("/api", apiLimiter)
api.GET("/users", getUsersHandler)
api.GET("/products", getProductsHandler)

User-Based Rate Limiting

For authenticated endpoints, you might want to limit based on user ID rather than IP address:

go
userLimiter := middleware.RateLimiter(middleware.RateLimiterConfig{
    Limit: rate.Limit(20),
    Burst: 50,
    IdentifierExtractor: func(c echo.Context) (string, error) {
        // Get the user from the JWT token or session; use the comma-ok
        // form so an unexpected type doesn't panic
        user, ok := c.Get("user").(*YourUserType)
        if !ok {
            return "", echo.ErrUnauthorized
        }
        return user.ID, nil
    },
})

// Protected routes with user-based rate limiting
protected := e.Group("/protected", authMiddleware, userLimiter)
protected.GET("/dashboard", dashboardHandler)

Rate Limiting with Redis

For distributed systems with multiple servers, you need a shared storage for rate limiting counters. Redis is perfect for this:

go
package middleware

import (
    "context"
    "fmt"
    "net/http"
    "time"

    "github.com/go-redis/redis/v8"
    "github.com/labstack/echo/v4"
)

// RedisRateLimiter implements fixed-window rate limiting backed by Redis
func RedisRateLimiter(redisClient *redis.Client, requests int, duration time.Duration) echo.MiddlewareFunc {
    return func(next echo.HandlerFunc) echo.HandlerFunc {
        return func(c echo.Context) error {
            // Get identifier (e.g., IP address)
            ip := c.RealIP()
            key := "ratelimit:" + ip

            ctx := context.Background()

            // Increment the counter for this IP
            count, err := redisClient.Incr(ctx, key).Result()
            if err != nil {
                // If Redis fails, allow the request but log the error
                c.Logger().Error("Redis error: ", err)
                return next(c)
            }

            // Set expiration on the first request of the window
            if count == 1 {
                redisClient.Expire(ctx, key, duration)
            }

            // Set headers with rate limit info (never report a negative remainder)
            remaining := requests - int(count)
            if remaining < 0 {
                remaining = 0
            }
            c.Response().Header().Set("X-RateLimit-Limit", fmt.Sprintf("%d", requests))
            c.Response().Header().Set("X-RateLimit-Remaining", fmt.Sprintf("%d", remaining))

            // Check if the limit is exceeded
            if count > int64(requests) {
                ttl, err := redisClient.TTL(ctx, key).Result()
                if err != nil {
                    ttl = 0
                }

                c.Response().Header().Set("X-RateLimit-Reset", fmt.Sprintf("%d", time.Now().Add(ttl).Unix()))
                return c.JSON(http.StatusTooManyRequests, map[string]string{
                    "message":     "Rate limit exceeded",
                    "retry_after": fmt.Sprintf("%.0f", ttl.Seconds()),
                })
            }

            return next(c)
        }
    }
}

Usage:

go
package main

import (
    "net/http"
    "time"

    "github.com/go-redis/redis/v8"
    "github.com/labstack/echo/v4"

    "yourproject/middleware"
)

func main() {
    e := echo.New()

    // Connect to Redis
    redisClient := redis.NewClient(&redis.Options{
        Addr: "localhost:6379",
    })

    // Rate limit: 100 requests per minute
    e.Use(middleware.RedisRateLimiter(redisClient, 100, time.Minute))

    // Routes
    e.GET("/", func(c echo.Context) error {
        return c.String(http.StatusOK, "Hello, World!")
    })

    e.Logger.Fatal(e.Start(":8080"))
}
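The Redis middleware above counts requests in fixed windows, which allows short traffic spikes at window boundaries. The sliding-window idea mentioned earlier avoids that; before porting it to Redis (see the exercises below), it helps to see it in plain Go. This self-contained sketch keeps a per-client log of request timestamps; the `slidingWindow` type is illustrative, not a library API:

go
package main

import (
    "fmt"
    "time"
)

// slidingWindow keeps a log of request timestamps and allows a request
// only if fewer than `limit` requests happened in the trailing window.
type slidingWindow struct {
    limit  int
    window time.Duration
    events []time.Time
}

// Allow takes the current time as a parameter so behavior is deterministic.
func (s *slidingWindow) Allow(now time.Time) bool {
    cutoff := now.Add(-s.window)
    // Drop timestamps that have aged out of the window.
    kept := s.events[:0]
    for _, t := range s.events {
        if t.After(cutoff) {
            kept = append(kept, t)
        }
    }
    s.events = kept
    if len(s.events) >= s.limit {
        return false
    }
    s.events = append(s.events, now)
    return true
}

func main() {
    // 2 requests per 100 ms window.
    sw := &slidingWindow{limit: 2, window: 100 * time.Millisecond}
    start := time.Now()

    fmt.Println(sw.Allow(start))                             // true
    fmt.Println(sw.Allow(start))                             // true
    fmt.Println(sw.Allow(start))                             // false: window full
    fmt.Println(sw.Allow(start.Add(150 * time.Millisecond))) // true: old entries expired
}

In Redis this pattern is typically implemented with a sorted set (ZADD the timestamp, ZREMRANGEBYSCORE to expire old entries, ZCARD to count).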

Best Practices for Rate Limiting

Communicate Limits to Clients

Good API design includes communicating rate limits to clients through HTTP headers:

go
func RateLimitMiddleware() echo.MiddlewareFunc {
    // ... limiter setup ...

    return func(next echo.HandlerFunc) echo.HandlerFunc {
        return func(c echo.Context) error {
            // ... rate limiting logic ...

            // Set rate limit headers (real values would come from the limiter)
            c.Response().Header().Set("X-RateLimit-Limit", "100")
            c.Response().Header().Set("X-RateLimit-Remaining", "95")
            c.Response().Header().Set("X-RateLimit-Reset", "1616161616")

            return next(c)
        }
    }
}

Graceful Rate Limit Responses

When clients hit the rate limit, provide helpful information:

go
return c.JSON(http.StatusTooManyRequests, map[string]interface{}{
    "error":             "Rate limit exceeded",
    "message":           "Please slow down your request rate",
    "retry_after":       60, // Seconds until the limit resets
    "documentation_url": "https://api.example.com/docs/rate-limiting",
})
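Alongside the JSON body, it's worth setting the standard `Retry-After` HTTP header on 429 responses, since generic HTTP clients and proxies understand it even if they never parse your body. A minimal standard-library sketch (the `rateLimitExceeded` helper is illustrative):

go
package main

import (
    "fmt"
    "net/http"
    "net/http/httptest"
    "strconv"
)

// rateLimitExceeded writes a 429 response with the standard Retry-After
// header, whose value is the number of seconds a client should wait
// before retrying (an HTTP date is also permitted).
func rateLimitExceeded(w http.ResponseWriter, retryAfterSeconds int) {
    w.Header().Set("Retry-After", strconv.Itoa(retryAfterSeconds))
    w.Header().Set("Content-Type", "application/json")
    w.WriteHeader(http.StatusTooManyRequests)
    fmt.Fprint(w, `{"error":"Rate limit exceeded"}`)
}

func main() {
    rec := httptest.NewRecorder()
    rateLimitExceeded(rec, 60)
    fmt.Println(rec.Code)                        // 429
    fmt.Println(rec.Header().Get("Retry-After")) // 60
}

In Echo the equivalent is a `c.Response().Header().Set("Retry-After", ...)` call before returning the 429 JSON.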

Implement Tiered Rate Limiting

Consider implementing different tiers of rate limiting based on user types:

go
func getTierLimit(c echo.Context) (int, int) {
    user := c.Get("user").(*User)

    switch user.Tier {
    case "premium":
        return 1000, 50 // 1000 requests with a burst of 50
    case "standard":
        return 100, 10 // 100 requests with a burst of 10
    default:
        return 20, 5 // 20 requests with a burst of 5
    }
}

Monitoring and Debugging Rate Limiting

When implementing rate limiting, it's important to monitor its effectiveness and troubleshoot issues:

go
func RateLimitMiddleware() echo.MiddlewareFunc {
    // ... limiter setup ...

    return func(next echo.HandlerFunc) echo.HandlerFunc {
        return func(c echo.Context) error {
            // ... rate limiting logic that sets `allowed` ...

            if !allowed {
                // Log rate limit events so you can spot abuse and tune limits
                c.Logger().Warnf("Rate limit exceeded for IP %s on path %s",
                    c.RealIP(), c.Request().URL.Path)

                // Return 429 Too Many Requests
                return c.JSON(http.StatusTooManyRequests, map[string]string{
                    "error": "Rate limit exceeded",
                })
            }

            // Continue with the request if allowed
            return next(c)
        }
    }
}

Summary

Rate limiting is a crucial security measure for any web application or API. In this tutorial, we've covered:

  1. Basic concepts of rate limiting and its importance
  2. Different rate limiting algorithms and strategies
  3. Implementing IP-based rate limiting in Echo
  4. Creating flexible and configurable rate limiting middleware
  5. Advanced strategies like user-based and Redis-backed rate limiting
  6. Best practices for communicating limits to clients

By implementing rate limiting in your Echo applications, you can protect your services from abuse, ensure fair resource distribution, and maintain stability under load.

Exercises

  1. Implement a rate limiter that applies different limits based on the HTTP method (stricter for POST/PUT than for GET)
  2. Create a "smart" rate limiter that detects suspicious patterns (e.g., too many failed login attempts) and applies stricter limits
  3. Extend the Redis-based rate limiter to implement a sliding window algorithm
  4. Implement a whitelist feature for your rate limiter to exclude certain IPs or users
  5. Add detailed metrics to your rate limiter to track when and how often limits are hit

