Echo Benchmarking
When developing web applications with Echo, it's essential to understand how your application performs under various conditions. Benchmarking helps you measure your application's performance characteristics, identify bottlenecks, and make data-driven optimization decisions. This guide will walk you through the process of benchmarking Echo applications.
Introduction to Benchmarking
Benchmarking is the practice of measuring the performance of your application in a controlled environment. For web applications built with Echo, this typically involves:
- Measuring response times
- Calculating requests per second (throughput)
- Analyzing resource usage (CPU, memory)
- Evaluating concurrency handling
Proper benchmarking allows you to:
- Establish performance baselines
- Compare different implementation approaches
- Identify performance regressions
- Set realistic performance goals
Benchmarking Tools
Go's Built-in Testing Package
Go provides built-in benchmarking capabilities through its testing package. Here's how you can benchmark an Echo handler:
package handlers

import (
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/labstack/echo/v4"
)

func HelloHandler(c echo.Context) error {
	return c.String(http.StatusOK, "Hello, World!")
}

func BenchmarkHelloHandler(b *testing.B) {
	// Setup
	e := echo.New()
	req := httptest.NewRequest(http.MethodGet, "/", nil)

	// Reset timer before the loop
	b.ResetTimer()

	// Run the benchmark
	for i := 0; i < b.N; i++ {
		// Use a fresh recorder and context per iteration so the response
		// body does not accumulate across runs and skew allocation counts
		rec := httptest.NewRecorder()
		c := e.NewContext(req, rec)
		if err := HelloHandler(c); err != nil {
			b.Fatal(err)
		}
	}
}
Run the benchmark with:
go test -bench=. -benchmem
Example output:
BenchmarkHelloHandler-8   1000000   1140 ns/op   568 B/op   9 allocs/op
This output indicates:
- The handler was executed 1,000,000 times
- Each execution took approximately 1140 nanoseconds
- Each execution allocated around 568 bytes
- Each execution made 9 heap allocations
Apache Benchmark (ab)
Apache Benchmark is a popular command-line tool for benchmarking HTTP servers:
ab -n 10000 -c 100 http://localhost:1323/
This command sends 10,000 requests with a concurrency level of 100 to your Echo server.
Example output:
This is ApacheBench, Version 2.3 <$Revision: 1843412 $>
...
Server Software:
Server Hostname: localhost
Server Port: 1323
Document Path: /
Document Length: 13 bytes
Concurrency Level: 100
Time taken for tests: 0.598 seconds
Complete requests: 10000
Failed requests: 0
Total transferred: 1460000 bytes
HTML transferred: 130000 bytes
Requests per second: 16711.28 [#/sec] (mean)
Time per request: 5.984 [ms] (mean)
Time per request: 0.060 [ms] (mean, across all concurrent requests)
Transfer rate: 2381.92 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 2 0.6 2 5
Processing: 1 4 0.8 4 8
Waiting: 0 3 0.8 3 7
Total: 2 6 0.8 6 9
Percentage of the requests served within a certain time (ms)
50% 6
66% 6
75% 6
80% 6
90% 7
95% 7
98% 8
99% 8
100% 9 (longest request)
Hey
Hey is a modern HTTP load generator that supports HTTP/2:
hey -n 10000 -c 100 http://localhost:1323/
wrk
wrk is a modern HTTP benchmarking tool capable of generating significant load:
wrk -t12 -c400 -d30s http://localhost:1323/
This runs a benchmark for 30 seconds, using 12 threads, and keeping 400 HTTP connections open.
Creating a Benchmarkable Echo Application
Let's create a simple Echo application with different handler implementations to benchmark:
package main

import (
	"net/http"
	"strconv"

	"github.com/labstack/echo/v4"
)

func main() {
	e := echo.New()

	// Simple string response
	e.GET("/simple", func(c echo.Context) error {
		return c.String(http.StatusOK, "Hello, World!")
	})

	// JSON response
	e.GET("/json", func(c echo.Context) error {
		return c.JSON(http.StatusOK, map[string]string{
			"message": "Hello, World!",
		})
	})

	// Response with processing
	e.GET("/compute/:n", func(c echo.Context) error {
		n, err := strconv.Atoi(c.Param("n"))
		if err != nil {
			return c.String(http.StatusBadRequest, "Invalid number")
		}
		// Simulate computation
		result := fibonacci(n)
		return c.String(http.StatusOK, strconv.Itoa(result))
	})

	e.Logger.Fatal(e.Start(":1323"))
}

// Fibonacci calculation to simulate work
func fibonacci(n int) int {
	if n <= 1 {
		return n
	}
	return fibonacci(n-1) + fibonacci(n-2)
}
Benchmarking Different Endpoints
Let's benchmark each endpoint to compare performance:
Simple String Response
hey -n 10000 -c 100 http://localhost:1323/simple
JSON Response
hey -n 10000 -c 100 http://localhost:1323/json
Computation Endpoint
hey -n 1000 -c 10 http://localhost:1323/compute/20
Note: We're using fewer requests for the compute endpoint since it's more CPU-intensive.
Analyzing Benchmark Results
When analyzing benchmark results, pay attention to:
- Requests per second (RPS): Higher is better, indicates throughput
- Latency: Lower is better, especially p95 and p99 percentiles
- Error rate: Should be as close to 0% as possible
- Resource utilization: CPU, memory, network I/O
Interpreting Common Issues
| Issue | Possible Cause | Potential Solution |
|---|---|---|
| Low RPS | CPU bottleneck, inefficient code | Profile code, optimize algorithms |
| High latency | Blocking operations, resource contention | Use goroutines, implement caching |
| High error rate | Resource exhaustion, bugs | Increase timeouts, fix code issues |
| Memory growth | Memory leaks, excessive allocations | Use pprof to identify leaks |
Best Practices for Echo Benchmarking
- Consistent environment: Always benchmark in the same environment for fair comparisons
- Warm-up period: Allow the application to warm up before recording results
- Multiple runs: Take the average of multiple benchmark runs
- Realistic scenarios: Benchmark with request patterns that reflect real-world usage
- Measure what matters: Focus on metrics that impact your users
- Isolate components: Benchmark individual parts of your application separately
- Test concurrency limits: Gradually increase concurrency until performance degrades
Practical Example: Middleware Benchmarking
Middleware can significantly impact your application's performance. Let's benchmark an Echo application with and without middleware:
package main

import (
	"net/http"
	"time"

	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

func main() {
	// With minimal middleware
	e1 := echo.New()
	e1.Use(middleware.Recover())
	e1.GET("/minimal", func(c echo.Context) error {
		return c.String(http.StatusOK, "Hello, World!")
	})

	// With multiple middleware
	e2 := echo.New()
	e2.Use(middleware.Logger())
	e2.Use(middleware.Recover())
	e2.Use(middleware.CORS())
	e2.Use(middleware.Gzip())
	e2.Use(middleware.RequestID())
	e2.Use(customMiddleware)
	e2.GET("/full", func(c echo.Context) error {
		return c.String(http.StatusOK, "Hello, World!")
	})

	// Start servers on different ports
	go func() {
		e1.Logger.Fatal(e1.Start(":1323"))
	}()
	e2.Logger.Fatal(e2.Start(":1324"))
}

func customMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
	return func(c echo.Context) error {
		// Simulate some middleware work
		time.Sleep(100 * time.Microsecond)
		return next(c)
	}
}
Benchmark both endpoints:
hey -n 10000 -c 100 http://localhost:1323/minimal
hey -n 10000 -c 100 http://localhost:1324/full
Optimizing Based on Benchmark Results
After identifying bottlenecks through benchmarking, here are some optimization techniques:
- Route-specific middleware: Apply middleware only where needed

  // Only for specific routes that need it
  apiGroup := e.Group("/api")
  apiGroup.Use(middleware.JWT([]byte("secret")))

- Efficient JSON handling: Use custom marshaling for hot paths

  type Response struct {
      Message string `json:"message"`
  }

  // Pre-allocate common responses
  var helloResponse = &Response{Message: "Hello, World!"}

  e.GET("/optimized-json", func(c echo.Context) error {
      return c.JSON(http.StatusOK, helloResponse)
  })

- Connection pooling: Reuse database connections

  db, err := sql.Open("postgres", connStr)
  if err != nil {
      log.Fatal(err)
  }

  // Set connection pool limits
  db.SetMaxIdleConns(10)
  db.SetMaxOpenConns(100)
  db.SetConnMaxLifetime(time.Hour)

- Caching: Implement response caching for frequently accessed data

  var (
      cache = make(map[string][]byte)
      mu    sync.RWMutex
  )

  e.GET("/cached", func(c echo.Context) error {
      key := "greeting"

      // Try to get from cache
      mu.RLock()
      data, found := cache[key]
      mu.RUnlock()
      if found {
          return c.JSONBlob(http.StatusOK, data)
      }

      // Generate response
      response := map[string]string{"message": "Hello, World!"}
      jsonData, err := json.Marshal(response)
      if err != nil {
          return err
      }

      // Store in cache
      mu.Lock()
      cache[key] = jsonData
      mu.Unlock()

      return c.JSONBlob(http.StatusOK, jsonData)
  })
Continuous Performance Testing
Integrate benchmarking into your CI/CD pipeline to catch performance regressions early:
- Run benchmarks against each PR
- Compare results against the baseline
- Fail the build if performance degrades beyond a threshold
- Store historical benchmark data for trend analysis
Summary
Benchmarking is an essential practice for developing high-performance Echo applications. By systematically measuring your application's performance characteristics, you can identify bottlenecks, make informed optimization decisions, and ensure your application meets its performance requirements.
Remember that premature optimization can lead to unnecessary complexity, so always benchmark first to determine where optimization efforts should be focused. With the tools and techniques covered in this guide, you're now equipped to effectively benchmark and optimize your Echo applications.
Additional Resources
- Echo Performance Guide
- Go Testing and Benchmarking Documentation
- pprof: Go Profiling Tool
- Hey: HTTP Load Generator
- wrk: Modern HTTP Benchmarking Tool
Exercises
- Create a simple Echo application and benchmark it with different concurrency levels (10, 50, 100, 200) to determine its scaling characteristics.
- Implement two versions of the same endpoint, one with and one without response caching, and compare their performance.
- Profile an Echo application handling database queries to identify and optimize the slowest operations.
- Experiment with different middleware configurations and measure their impact on overall application performance.
- Create a benchmark suite that compares the performance of different JSON serialization approaches in Echo.
If you spot any mistakes on this website, please let me know at [email protected]. I’d greatly appreciate your feedback! :)