
Echo Benchmarking

When developing web applications with Echo, it's essential to understand how your application performs under various conditions. Benchmarking helps you measure your application's performance characteristics, identify bottlenecks, and make data-driven optimization decisions. This guide will walk you through the process of benchmarking Echo applications.

Introduction to Benchmarking

Benchmarking is the practice of measuring the performance of your application in a controlled environment. For web applications built with Echo, this typically involves:

  1. Measuring response times
  2. Calculating requests per second (throughput)
  3. Analyzing resource usage (CPU, memory)
  4. Evaluating concurrency handling

Proper benchmarking allows you to:

  • Establish performance baselines
  • Compare different implementation approaches
  • Identify performance regressions
  • Set realistic performance goals

Benchmarking Tools

Go's Built-in Testing Package

Go provides built-in benchmarking capabilities through its testing package. Here's how you can benchmark an Echo handler:

```go
package handlers

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/labstack/echo/v4"
)

func HelloHandler(c echo.Context) error {
	return c.String(http.StatusOK, "Hello, World!")
}

func BenchmarkHelloHandler(b *testing.B) {
	// Setup
	e := echo.New()
	req := httptest.NewRequest(http.MethodGet, "/", nil)

	// Reset timer before the loop
	b.ResetTimer()

	// Run the benchmark
	for i := 0; i < b.N; i++ {
		// Use a fresh recorder and context each iteration so the
		// response body does not accumulate across runs
		rec := httptest.NewRecorder()
		c := e.NewContext(req, rec)
		if err := HelloHandler(c); err != nil {
			b.Fatal(err)
		}
	}
}
```

Run the benchmark with:

```bash
go test -bench=. -benchmem
```

Example output:

```
BenchmarkHelloHandler-8      1000000              1140 ns/op             568 B/op          9 allocs/op
```

This output indicates:

  • The handler was executed 1,000,000 times
  • Each execution took approximately 1140 nanoseconds
  • Each execution allocated around 568 bytes
  • Each execution made 9 heap allocations

ApacheBench (ab)

ApacheBench is a popular command-line tool for benchmarking HTTP servers:

```bash
ab -n 10000 -c 100 http://localhost:1323/
```

This command sends 10,000 requests with a concurrency level of 100 to your Echo server.

Example output:

```
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
...

Server Software:
Server Hostname:        localhost
Server Port:            1323

Document Path:          /
Document Length:        13 bytes

Concurrency Level:      100
Time taken for tests:   0.598 seconds
Complete requests:      10000
Failed requests:        0
Total transferred:      1460000 bytes
HTML transferred:       130000 bytes
Requests per second:    16711.28 [#/sec] (mean)
Time per request:       5.984 [ms] (mean)
Time per request:       0.060 [ms] (mean, across all concurrent requests)
Transfer rate:          2381.92 [Kbytes/sec] received

Connection Times (ms)
              min  mean[+/-sd] median   max
Connect:        0    2   0.6      2       5
Processing:     1    4   0.8      4       8
Waiting:        0    3   0.8      3       7
Total:          2    6   0.8      6       9

Percentage of the requests served within a certain time (ms)
  50%      6
  66%      6
  75%      6
  80%      6
  90%      7
  95%      7
  98%      8
  99%      8
 100%      9 (longest request)
```

Hey

Hey is a modern HTTP load generator that supports HTTP/2:

```bash
hey -n 10000 -c 100 http://localhost:1323/
```

wrk

wrk is a modern HTTP benchmarking tool capable of generating significant load:

```bash
wrk -t12 -c400 -d30s http://localhost:1323/
```

This runs a benchmark for 30 seconds, using 12 threads, and keeping 400 HTTP connections open.

Creating a Benchmarkable Echo Application

Let's create a simple Echo application with different handler implementations to benchmark:

```go
package main

import (
	"net/http"
	"strconv"

	"github.com/labstack/echo/v4"
)

func main() {
	e := echo.New()

	// Simple string response
	e.GET("/simple", func(c echo.Context) error {
		return c.String(http.StatusOK, "Hello, World!")
	})

	// JSON response
	e.GET("/json", func(c echo.Context) error {
		return c.JSON(http.StatusOK, map[string]string{
			"message": "Hello, World!",
		})
	})

	// Response with processing
	e.GET("/compute/:n", func(c echo.Context) error {
		n, err := strconv.Atoi(c.Param("n"))
		if err != nil {
			return c.String(http.StatusBadRequest, "Invalid number")
		}

		// Simulate computation
		result := fibonacci(n)
		return c.String(http.StatusOK, strconv.Itoa(result))
	})

	e.Logger.Fatal(e.Start(":1323"))
}

// Fibonacci calculation to simulate work
func fibonacci(n int) int {
	if n <= 1 {
		return n
	}
	return fibonacci(n-1) + fibonacci(n-2)
}
```

Benchmarking Different Endpoints

Let's benchmark each endpoint to compare performance:

Simple String Response

```bash
hey -n 10000 -c 100 http://localhost:1323/simple
```

JSON Response

```bash
hey -n 10000 -c 100 http://localhost:1323/json
```

Computation Endpoint

```bash
hey -n 1000 -c 10 http://localhost:1323/compute/20
```

Note: We're using fewer requests for the compute endpoint since it's more CPU-intensive.

Analyzing Benchmark Results

When analyzing benchmark results, pay attention to:

  1. Requests per second (RPS): Higher is better, indicates throughput
  2. Latency: Lower is better, especially p95 and p99 percentiles
  3. Error rate: Should be as close to 0% as possible
  4. Resource utilization: CPU, memory, network I/O

Interpreting Common Issues

| Issue | Possible Cause | Potential Solution |
| --- | --- | --- |
| Low RPS | CPU bottleneck, inefficient code | Profile code, optimize algorithms |
| High latency | Blocking operations, resource contention | Use goroutines, implement caching |
| High error rate | Resource exhaustion, bugs | Increase timeouts, fix code issues |
| Memory growth | Memory leaks, excessive allocations | Use pprof to identify leaks |

Best Practices for Echo Benchmarking

  1. Consistent environment: Always benchmark in the same environment for fair comparisons

  2. Warm-up period: Allow the application to warm up before recording results

  3. Multiple runs: Take the average of multiple benchmark runs

  4. Realistic scenarios: Benchmark with request patterns that reflect real-world usage

  5. Measure what matters: Focus on metrics that impact your users

  6. Isolate components: Benchmark individual parts of your application separately

  7. Test concurrency limits: Gradually increase concurrency until performance degrades

Practical Example: Middleware Benchmarking

Middleware can significantly impact your application's performance. Let's benchmark an Echo application with and without middleware:

```go
package main

import (
	"net/http"
	"time"

	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

func main() {
	// With minimal middleware
	e1 := echo.New()
	e1.Use(middleware.Recover())
	e1.GET("/minimal", func(c echo.Context) error {
		return c.String(http.StatusOK, "Hello, World!")
	})

	// With multiple middleware
	e2 := echo.New()
	e2.Use(middleware.Logger())
	e2.Use(middleware.Recover())
	e2.Use(middleware.CORS())
	e2.Use(middleware.Gzip())
	e2.Use(middleware.RequestID())
	e2.Use(customMiddleware)
	e2.GET("/full", func(c echo.Context) error {
		return c.String(http.StatusOK, "Hello, World!")
	})

	// Start servers on different ports
	go func() {
		e1.Logger.Fatal(e1.Start(":1323"))
	}()
	e2.Logger.Fatal(e2.Start(":1324"))
}

func customMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
	return func(c echo.Context) error {
		// Simulate some middleware work
		time.Sleep(100 * time.Microsecond)
		return next(c)
	}
}
```

Benchmark both endpoints:

```bash
hey -n 10000 -c 100 http://localhost:1323/minimal
hey -n 10000 -c 100 http://localhost:1324/full
```

Optimizing Based on Benchmark Results

After identifying bottlenecks through benchmarking, here are some optimization techniques:

  1. Route-specific middleware: Apply middleware only where needed

   ```go
   // Only for specific routes that need it
   apiGroup := e.Group("/api")
   apiGroup.Use(middleware.JWT([]byte("secret")))
   ```
  2. Efficient JSON handling: Use custom marshaling for hot paths

   ```go
   type Response struct {
       Message string `json:"message"`
   }

   // Pre-allocate common responses
   var helloResponse = &Response{Message: "Hello, World!"}

   e.GET("/optimized-json", func(c echo.Context) error {
       return c.JSON(http.StatusOK, helloResponse)
   })
   ```
  3. Connection pooling: Reuse database connections

   ```go
   db, err := sql.Open("postgres", connStr)
   if err != nil {
       log.Fatal(err)
   }
   // Set connection pool limits
   db.SetMaxIdleConns(10)
   db.SetMaxOpenConns(100)
   db.SetConnMaxLifetime(time.Hour)
   ```
  4. Caching: Implement response caching for frequently accessed data

   ```go
   var (
       cache = make(map[string][]byte)
       mu    sync.RWMutex
   )

   e.GET("/cached", func(c echo.Context) error {
       key := "greeting"

       // Try to get from cache
       mu.RLock()
       data, found := cache[key]
       mu.RUnlock()

       if found {
           return c.JSONBlob(http.StatusOK, data)
       }

       // Generate response
       response := map[string]string{"message": "Hello, World!"}
       jsonData, err := json.Marshal(response)
       if err != nil {
           return err
       }

       // Store in cache
       mu.Lock()
       cache[key] = jsonData
       mu.Unlock()

       return c.JSONBlob(http.StatusOK, jsonData)
   })
   ```

Continuous Performance Testing

Integrate benchmarking into your CI/CD pipeline to catch performance regressions early:

  1. Run benchmarks against each PR
  2. Compare results against the baseline
  3. Fail the build if performance degrades beyond a threshold
  4. Store historical benchmark data for trend analysis
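A sketch of steps 1–2 as shell commands, assuming `benchstat` (from golang.org/x/perf/cmd/benchstat) is installed and `baseline.txt` holds previously stored results:

```bash
# Run each benchmark 10 times so benchstat can compute variance
go test -bench=. -benchmem -count=10 ./... > current.txt

# Compare against the stored baseline; benchstat reports per-benchmark
# deltas and flags statistically significant changes
benchstat baseline.txt current.txt
```

A CI job can then parse the benchstat output and fail the build when a regression exceeds your chosen threshold.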

Summary

Benchmarking is an essential practice for developing high-performance Echo applications. By systematically measuring your application's performance characteristics, you can identify bottlenecks, make informed optimization decisions, and ensure your application meets its performance requirements.

Remember that premature optimization can lead to unnecessary complexity, so always benchmark first to determine where optimization efforts should be focused. With the tools and techniques covered in this guide, you're now equipped to effectively benchmark and optimize your Echo applications.

Exercises

  1. Create a simple Echo application and benchmark it with different concurrency levels (10, 50, 100, 200) to determine its scaling characteristics.

  2. Implement two versions of the same endpoint—one with and one without response caching—and compare their performance.

  3. Profile an Echo application handling database queries to identify and optimize the slowest operations.

  4. Experiment with different middleware configurations and measure their impact on overall application performance.

  5. Create a benchmark suite that compares the performance of different JSON serialization approaches in Echo.


