Echo Retry Pattern

Introduction

When building applications that interact with external services or databases, you'll inevitably encounter failures. Network connectivity issues, service outages, or temporary system overloads can cause requests to fail. The Echo Retry Pattern provides a structured approach to handling these transient failures by automatically retrying failed operations, giving your application resilience against temporary disruptions.

In this tutorial, you'll learn how to implement retry logic in your Echo applications to create more robust APIs and services that can gracefully recover from common transient failures.

Understanding the Retry Pattern

The retry pattern is a simple but powerful concept:

  1. Attempt an operation (like an API call or database query)
  2. If it fails due to a transient issue, wait for a short time
  3. Try the operation again, up to a maximum number of attempts
  4. If all attempts fail, gracefully handle the error

Key components of an effective retry strategy include:

  • Maximum Retries: The number of times to attempt an operation
  • Backoff Strategy: How long to wait between retries
  • Error Classification: Determining which errors are retryable

Implementing Basic Retry Logic

Let's start with a simple implementation of retry logic in an Echo application:

go
package main

import (
	"errors"
	"net/http"
	"time"

	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

// externalAPICall simulates a call to an external service that might fail
func externalAPICall() (string, error) {
	// Simulate random failures (roughly a 30% chance)
	if time.Now().UnixNano()%10 < 3 {
		return "", errors.New("service unavailable")
	}
	return "Success response from external API", nil
}

// withRetry wraps a function with retry logic
func withRetry(attempts int, delay time.Duration, fn func() (string, error)) (string, error) {
	var err error
	var result string

	for i := 0; i < attempts; i++ {
		result, err = fn()
		if err == nil {
			return result, nil
		}

		if i < attempts-1 { // don't sleep after the last attempt
			time.Sleep(delay)
			// You could use exponential backoff here:
			// delay = delay * 2
		}
	}

	return "", err
}

func main() {
	e := echo.New()
	e.Use(middleware.Logger())

	e.GET("/api/data", func(c echo.Context) error {
		// Try the operation with up to 3 attempts, waiting 1 second between them
		result, err := withRetry(3, 1*time.Second, externalAPICall)

		if err != nil {
			return c.JSON(http.StatusServiceUnavailable, map[string]string{
				"error": "Service unavailable after multiple attempts",
			})
		}

		return c.JSON(http.StatusOK, map[string]string{
			"data": result,
		})
	})

	e.Logger.Fatal(e.Start(":8080"))
}

Example Input/Output

Request:

GET /api/data

Successful Output (after potential retries):

json
{
  "data": "Success response from external API"
}

Failed Output (after all retries fail):

json
{
  "error": "Service unavailable after multiple attempts"
}

Advanced Retry Strategies

Exponential Backoff

A simple fixed delay between retries isn't always optimal. Exponential backoff increases the delay between successive retry attempts, which can help prevent overwhelming a struggling service:

go
// withExponentialBackoff retries fn with a delay that doubles after each failure.
// Note: this fragment also needs the "math/rand" import for the jitter.
func withExponentialBackoff(maxAttempts int, initialDelay time.Duration, fn func() (string, error)) (string, error) {
	var err error
	var result string
	currentDelay := initialDelay

	for i := 0; i < maxAttempts; i++ {
		result, err = fn()
		if err == nil {
			return result, nil
		}

		if i < maxAttempts-1 {
			time.Sleep(currentDelay)
			// Exponential backoff with jitter to prevent synchronized retries.
			// rand.Int63n panics on a non-positive argument, so use a
			// non-trivial initialDelay (at least a few milliseconds).
			jitter := time.Duration(rand.Int63n(int64(currentDelay / 2)))
			currentDelay = (currentDelay * 2) + jitter
		}
	}

	return "", err
}

Categorizing Retryable Errors

Not all errors should trigger retries. For example, validation errors (400 Bad Request) shouldn't be retried as they won't succeed on subsequent attempts:

go
// isRetryable reports whether an error is worth retrying.
// Note: this fragment needs the "errors", "io", and "strings" imports.
func isRetryable(err error) bool {
	// This is a simplified example. In real applications,
	// you might check HTTP status codes, error types, or message contents.
	if err == nil {
		return false
	}

	// Check for specific error types that are worth retrying
	if errors.Is(err, io.ErrUnexpectedEOF) ||
		errors.Is(err, io.EOF) ||
		strings.Contains(err.Error(), "connection reset") ||
		strings.Contains(err.Error(), "service unavailable") {
		return true
	}

	return false
}

func withSmartRetry(attempts int, delay time.Duration, fn func() (string, error)) (string, error) {
	var err error
	var result string

	for i := 0; i < attempts; i++ {
		result, err = fn()
		if err == nil {
			return result, nil
		}

		// Only retry if the error is retryable
		if !isRetryable(err) {
			return "", err
		}

		if i < attempts-1 {
			time.Sleep(delay)
			delay = delay * 2 // Exponential backoff
		}
	}

	return "", err
}

Real-World Application: Resilient External API Client

Let's implement a more complete example of the retry pattern with a service that calls an external API:

go
package main

import (
	"context"
	"errors"
	"math/rand"
	"net/http"
	"time"

	"github.com/labstack/echo/v4"
)

// WeatherService represents a client for a weather API
type WeatherService struct {
	baseURL    string
	httpClient *http.Client
	maxRetries int
	baseDelay  time.Duration
}

// NewWeatherService creates a new WeatherService client
func NewWeatherService(baseURL string) *WeatherService {
	return &WeatherService{
		baseURL: baseURL,
		httpClient: &http.Client{
			Timeout: 5 * time.Second,
		},
		maxRetries: 3,
		baseDelay:  500 * time.Millisecond,
	}
}

// ErrTemporaryFailure represents a transient error
var ErrTemporaryFailure = errors.New("temporary failure")

// GetWeather fetches weather data for a location with retry capability
func (ws *WeatherService) GetWeather(ctx context.Context, location string) (string, error) {
	var lastErr error

	for attempt := 0; attempt < ws.maxRetries; attempt++ {
		// In a real app, this would make an HTTP request to the weather API.
		// For demonstration, we simulate occasional failures; because a failure
		// is only simulated on non-final attempts, the demo always eventually succeeds.
		if rand.Intn(10) < 3 && attempt < ws.maxRetries-1 {
			lastErr = ErrTemporaryFailure
			delay := ws.baseDelay * time.Duration(1<<attempt) // Exponential backoff

			// Add jitter to prevent the thundering herd problem
			jitter := time.Duration(rand.Int63n(int64(delay / 4)))
			delay = delay + jitter

			select {
			case <-ctx.Done():
				return "", ctx.Err()
			case <-time.After(delay):
				continue
			}
		}

		// Success case
		return "Sunny, 25°C in " + location, nil
	}

	return "", errors.New("failed to fetch weather data after multiple attempts: " + lastErr.Error())
}

func main() {
	e := echo.New()
	weatherService := NewWeatherService("https://api.weather-example.com")

	e.GET("/weather/:location", func(c echo.Context) error {
		location := c.Param("location")

		// Create a context that is canceled when the request finishes or times out
		ctx, cancel := context.WithTimeout(c.Request().Context(), 10*time.Second)
		defer cancel()

		weather, err := weatherService.GetWeather(ctx, location)
		if err != nil {
			if errors.Is(err, context.DeadlineExceeded) {
				return c.JSON(http.StatusGatewayTimeout, map[string]string{
					"error": "Request timed out",
				})
			}
			return c.JSON(http.StatusServiceUnavailable, map[string]string{
				"error": err.Error(),
			})
		}

		return c.JSON(http.StatusOK, map[string]string{
			"location": location,
			"forecast": weather,
		})
	})

	e.Logger.Fatal(e.Start(":8080"))
}

Input/Output Example

Request:

GET /weather/London

Successful Output (after potential retries):

json
{
  "location": "London",
  "forecast": "Sunny, 25°C in London"
}

Best Practices for the Echo Retry Pattern

  1. Set Appropriate Timeouts: Always set timeouts for both individual requests and the entire operation to prevent indefinite hanging.

  2. Use Exponential Backoff with Jitter: This prevents overwhelming services and avoids the "thundering herd" problem where many clients retry simultaneously.

  3. Define Clear Retry Policies:

    • Set reasonable maximum retry attempts (typically 3-5)
    • Only retry idempotent operations (GET, PUT, DELETE)
    • Only retry for specific error codes that indicate transient failures
  4. Implement Circuit Breaker: Consider combining the retry pattern with a circuit breaker to prevent repeated calls to a failing service. (This is a topic for another tutorial!)

  5. Log Retry Attempts: Log when operations are being retried to help with debugging and monitoring.

  6. Respect Retry-After Headers: If the external service provides a Retry-After header, honor it to implement a server-guided backoff strategy, as shown in the sketch below.
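
To illustrate that last point, here is a minimal sketch of how a retry helper might translate a Retry-After value into a wait duration. parseRetryAfter is a hypothetical helper (not part of Echo or the standard library); a retry loop would call it with resp.Header.Get("Retry-After") before sleeping, and the fragment needs the "net/http", "strconv", and "time" imports.

go
// parseRetryAfter converts a Retry-After header value into a wait duration.
// The header may carry either a number of seconds or an HTTP date; the
// provided fallback is returned when the header is absent or unparseable.
func parseRetryAfter(header string, fallback time.Duration) time.Duration {
	if header == "" {
		return fallback
	}
	// Delay in seconds, e.g. "Retry-After: 120"
	if secs, err := strconv.Atoi(header); err == nil && secs >= 0 {
		return time.Duration(secs) * time.Second
	}
	// HTTP date, e.g. "Retry-After: Wed, 21 Oct 2026 07:28:00 GMT"
	if t, err := http.ParseTime(header); err == nil {
		if wait := time.Until(t); wait > 0 {
			return wait
		}
	}
	return fallback
}

A retry loop would then sleep for parseRetryAfter(resp.Header.Get("Retry-After"), backoff) instead of its locally computed backoff.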

Implementing a Reusable Retry Middleware

For a more systematic approach, you can implement a retry middleware in Echo:

go
// RetryConfig holds configuration for the retry middleware
type RetryConfig struct {
	MaxAttempts      int
	InitialInterval  time.Duration
	MaxInterval      time.Duration
	RetryableStatus  []int
	RetryableMethods []string
}

// RetryMiddleware creates a middleware that re-runs the handler chain when a
// request fails with a retryable status code.
// Note: this fragment also needs the "math/rand" import for the jitter.
func RetryMiddleware(config RetryConfig) echo.MiddlewareFunc {
	if config.MaxAttempts <= 0 {
		config.MaxAttempts = 3
	}
	if config.InitialInterval <= 0 {
		config.InitialInterval = 100 * time.Millisecond
	}
	if config.MaxInterval <= 0 {
		config.MaxInterval = 2 * time.Second
	}
	if len(config.RetryableStatus) == 0 {
		config.RetryableStatus = []int{
			http.StatusInternalServerError,
			http.StatusBadGateway,
			http.StatusServiceUnavailable,
			http.StatusGatewayTimeout,
		}
	}
	if len(config.RetryableMethods) == 0 {
		config.RetryableMethods = []string{
			http.MethodGet,
			http.MethodPut,
			http.MethodDelete,
			http.MethodHead,
		}
	}

	return func(next echo.HandlerFunc) echo.HandlerFunc {
		return func(c echo.Context) error {
			// Check if the method should be retried
			method := c.Request().Method
			shouldRetry := false
			for _, m := range config.RetryableMethods {
				if m == method {
					shouldRetry = true
					break
				}
			}

			if !shouldRetry {
				return next(c)
			}

			// Implement retry logic. Note that this is a simplification: once a
			// handler has written (committed) a response, retrying cannot change
			// what the client already received, so this works best with handlers
			// that return errors instead of writing failure responses themselves.
			var err error
			var res *echo.Response

			for attempt := 0; attempt < config.MaxAttempts; attempt++ {
				// Create a fresh request copy for each attempt
				req := c.Request().Clone(c.Request().Context())
				c.SetRequest(req)

				err = next(c)
				res = c.Response()

				// Don't retry after the last attempt
				if attempt >= config.MaxAttempts-1 {
					break
				}

				// Check if the status code is in the retryable list
				retryable := false
				for _, status := range config.RetryableStatus {
					if status == res.Status {
						retryable = true
						break
					}
				}

				if !retryable {
					break
				}

				// Calculate backoff with exponential increase and jitter
				backoff := config.InitialInterval * time.Duration(1<<attempt)
				if backoff > config.MaxInterval {
					backoff = config.MaxInterval
				}
				jitter := time.Duration(rand.Int63n(int64(backoff / 4)))
				sleepTime := backoff + jitter

				time.Sleep(sleepTime)
			}

			return err
		}
	}
}

You can use this middleware globally or for specific routes; both approaches are shown below:

go
func main() {
	e := echo.New()

	// Apply retry middleware to all routes
	e.Use(RetryMiddleware(RetryConfig{
		MaxAttempts:     3,
		InitialInterval: 200 * time.Millisecond,
	}))

	// Your routes...

	e.Logger.Fatal(e.Start(":8080"))
}
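
For route-specific retries, here is a minimal sketch that scopes the middleware to a single route group; the /external prefix and the handler are illustrative names, not part of the tutorial's API:

go
// Apply the retry middleware only to routes under /external,
// with more aggressive settings than the rest of the application.
external := e.Group("/external", RetryMiddleware(RetryConfig{
	MaxAttempts:     5,
	InitialInterval: 500 * time.Millisecond,
}))

external.GET("/data", func(c echo.Context) error {
	// In a real app this handler would call the flaky upstream service
	return c.String(http.StatusOK, "data from an upstream service")
})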

Summary

The Echo Retry Pattern is an essential resilience pattern for building robust applications that can withstand transient failures. By implementing proper retry logic, your services can gracefully handle temporary network issues, service unavailability, and other common failures.

Key takeaways from this tutorial:

  1. Retry logic should be used for operations that might experience transient failures
  2. Implement exponential backoff with jitter to avoid overwhelming recovering services
  3. Only retry for appropriate error types and idempotent operations
  4. Set proper timeouts and maximum retry attempts
  5. Consider implementing the pattern as middleware for reuse across your application

By applying these principles, you'll create more resilient Echo applications that provide a better experience for your users, even when underlying services experience temporary disruptions.


Exercises

  1. Implement a retry middleware that respects the Retry-After header from external services
  2. Combine the retry pattern with a circuit breaker to prevent repeated calls to a failing service
  3. Create a configurable retry policy that can be applied to different routes with different settings
  4. Implement proper logging for retry attempts to help with debugging and monitoring
  5. Build a sample application that uses the retry pattern to interact with a flaky external API

