Echo Performance Monitoring

Introduction

Performance monitoring is a critical part of web application development: it helps ensure your Echo applications run efficiently and deliver a good user experience. In this guide, we'll explore techniques and tools for monitoring Echo applications, understanding performance bottlenecks, and optimizing your application's performance.

Performance monitoring involves measuring, collecting, and analyzing metrics about how your application behaves under various conditions. This data-driven approach helps you make informed decisions about optimization efforts rather than relying on guesswork.

Why Monitor Echo Performance?

Before diving into the "how," let's understand the "why":

  • Identify bottlenecks: Discover which parts of your application are slowing down response times
  • Plan capacity: Understand resource needs for expected traffic
  • Detect regressions: Catch performance issues before they affect users
  • Validate optimizations: Confirm that your performance improvements actually work
  • Improve user experience: Ensure your application remains responsive under load

Basic Echo Performance Metrics

Let's start with the fundamental metrics you should monitor in any Echo application:

1. Response Time

Response time measures how long it takes for your server to process a request and return a response.

go
package main

import (
    "github.com/labstack/echo/v4"
    "github.com/labstack/echo/v4/middleware"
)

func main() {
    e := echo.New()

    // Log the response time (latency) for every request
    e.Use(middleware.LoggerWithConfig(middleware.LoggerConfig{
        Format: "method=${method}, uri=${uri}, status=${status}, latency=${latency_human}\n",
    }))

    e.GET("/", func(c echo.Context) error {
        return c.String(200, "Hello, World!")
    })

    e.Logger.Fatal(e.Start(":1323"))
}

Output:

method=GET, uri=/, status=200, latency=15.2µs

2. Request Rate

The request rate (throughput) measures how many requests your application handles per unit of time.

3. Error Rate

Error rate measures the percentage of requests that result in errors (usually HTTP status codes 4xx or 5xx).
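
Both of these can be tracked without any external tooling. Below is a minimal sketch, assuming a hypothetical /stats endpoint and counter names of my own choosing (none of this is part of Echo), that counts requests and error responses with atomic counters and derives rough rates from process uptime:

go
package main

import (
    "fmt"
    "net/http"
    "sync/atomic"
    "time"

    "github.com/labstack/echo/v4"
)

var (
    startTime    = time.Now()
    requestCount int64
    errorCount   int64
)

// RateMiddleware counts every request, and every request that
// returned an error or a 4xx/5xx status.
func RateMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
    return func(c echo.Context) error {
        err := next(c)
        atomic.AddInt64(&requestCount, 1)
        if err != nil || c.Response().Status >= 400 {
            atomic.AddInt64(&errorCount, 1)
        }
        return err
    }
}

func main() {
    e := echo.New()
    e.Use(RateMiddleware)

    e.GET("/", func(c echo.Context) error {
        return c.String(http.StatusOK, "Hello, World!")
    })

    // Illustrative endpoint reporting request rate and error rate
    e.GET("/stats", func(c echo.Context) error {
        reqs := atomic.LoadInt64(&requestCount)
        errs := atomic.LoadInt64(&errorCount)
        errRate := 0.0
        if reqs > 0 {
            errRate = 100 * float64(errs) / float64(reqs)
        }
        return c.String(http.StatusOK, fmt.Sprintf(
            "requests=%d rate=%.2f/s error_rate=%.1f%%",
            reqs, float64(reqs)/time.Since(startTime).Seconds(), errRate))
    })

    e.Logger.Fatal(e.Start(":1323"))
}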

Setting Up Custom Metrics with Echo

While Echo's built-in logging provides basic information, you might want to collect more detailed metrics. Let's implement a custom middleware that captures response time, status code, and other useful data:

go
package main

import (
    "time"

    "github.com/labstack/echo/v4"
)

// MetricsMiddleware collects performance metrics for every request
func MetricsMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
    return func(c echo.Context) error {
        start := time.Now()

        // Process the request
        err := next(c)

        // Calculate response time
        responseTime := time.Since(start)

        // Get request details
        method := c.Request().Method
        path := c.Path()
        status := c.Response().Status

        // In a real application, you would send this data to your
        // metrics system; for this example, we just log it.
        c.Logger().Infof(
            "method=%s path=%s status=%d response_time=%s",
            method, path, status, responseTime,
        )

        return err
    }
}

func main() {
    e := echo.New()

    // Apply our metrics middleware
    e.Use(MetricsMiddleware)

    e.GET("/", func(c echo.Context) error {
        return c.String(200, "Hello, World!")
    })

    e.GET("/slow", func(c echo.Context) error {
        // Simulate slow processing
        time.Sleep(200 * time.Millisecond)
        return c.String(200, "Slow response")
    })

    e.Logger.Fatal(e.Start(":1323"))
}

Output when visiting /slow:

INFO method=GET path=/slow status=200 response_time=200.325ms
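
If you aren't ready to run a full metrics backend, the standard library's expvar package gives the middleware somewhere to send its data with zero dependencies. Here's a sketch along those lines; the variable names and the /debug/vars route are illustrative choices, not Echo conventions:

go
package main

import (
    "expvar"
    "net/http"
    "time"

    "github.com/labstack/echo/v4"
)

// Illustrative expvar variables; names are arbitrary choices.
var (
    requestCount   = expvar.NewInt("request_count")
    totalLatencyMs = expvar.NewInt("total_latency_ms")
)

// ExpvarMiddleware feeds request counts and latency into expvar variables.
func ExpvarMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
    return func(c echo.Context) error {
        start := time.Now()
        err := next(c)
        requestCount.Add(1)
        totalLatencyMs.Add(time.Since(start).Milliseconds())
        return err
    }
}

func main() {
    e := echo.New()
    e.Use(ExpvarMiddleware)

    // expvar publishes all registered variables as JSON at this endpoint
    e.GET("/debug/vars", echo.WrapHandler(expvar.Handler()))

    e.GET("/", func(c echo.Context) error {
        return c.String(http.StatusOK, "Hello, World!")
    })

    e.Logger.Fatal(e.Start(":1323"))
}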

Integrating with Prometheus for Advanced Monitoring

Prometheus is a popular open-source monitoring system that works well with Go applications. Let's see how to integrate it with Echo:

First, install the required packages:

bash
go get github.com/prometheus/client_golang/prometheus
go get github.com/prometheus/client_golang/prometheus/promhttp

Then, implement Prometheus metrics in your Echo application:

go
package main

import (
    "strconv"
    "time"

    "github.com/labstack/echo/v4"
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
    // Define Prometheus metrics
    httpRequestsTotal = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "http_requests_total",
            Help: "Total number of HTTP requests",
        },
        []string{"method", "path", "status"},
    )

    httpRequestDuration = prometheus.NewHistogramVec(
        prometheus.HistogramOpts{
            Name:    "http_request_duration_seconds",
            Help:    "Duration of HTTP requests in seconds",
            Buckets: prometheus.DefBuckets,
        },
        []string{"method", "path", "status"},
    )
)

func init() {
    // Register metrics with Prometheus
    prometheus.MustRegister(httpRequestsTotal)
    prometheus.MustRegister(httpRequestDuration)
}

// PrometheusMiddleware collects HTTP metrics for Prometheus
func PrometheusMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
    return func(c echo.Context) error {
        start := time.Now()

        // Process the request
        err := next(c)

        // Skip metrics for the metrics endpoint itself
        if c.Path() == "/metrics" {
            return err
        }

        // Record metrics after processing
        duration := time.Since(start).Seconds()
        status := strconv.Itoa(c.Response().Status)
        method := c.Request().Method
        path := c.Path()

        // Update Prometheus metrics
        httpRequestsTotal.WithLabelValues(method, path, status).Inc()
        httpRequestDuration.WithLabelValues(method, path, status).Observe(duration)

        return err
    }
}

func main() {
    e := echo.New()

    // Apply Prometheus middleware
    e.Use(PrometheusMiddleware)

    // Expose Prometheus metrics endpoint
    e.GET("/metrics", echo.WrapHandler(promhttp.Handler()))

    // Regular routes
    e.GET("/", func(c echo.Context) error {
        return c.String(200, "Hello, World!")
    })

    e.GET("/api/users", func(c echo.Context) error {
        time.Sleep(100 * time.Millisecond) // Simulate a database query
        return c.JSON(200, map[string]string{"status": "success"})
    })

    e.Logger.Fatal(e.Start(":1323"))
}

With this setup, your Echo application exposes metrics at /metrics that Prometheus can scrape. These metrics include:

  • Total number of requests by method, path, and status code
  • Request duration histograms for performance analysis

When you visit /metrics, you'll see output similar to:

# HELP http_request_duration_seconds Duration of HTTP requests in seconds
# TYPE http_request_duration_seconds histogram
http_request_duration_seconds_bucket{method="GET",path="/",status="200",le="0.005"} 1
...
# HELP http_requests_total Total number of HTTP requests
# TYPE http_requests_total counter
http_requests_total{method="GET",path="/",status="200"} 1
http_requests_total{method="GET",path="/api/users",status="200"} 2

Profiling Echo Applications

Go includes built-in profiling tools that can help identify performance bottlenecks in your Echo application. Let's integrate the net/http/pprof package:

go
package main

import (
    "net/http"
    "net/http/pprof"

    "github.com/labstack/echo/v4"
)

func main() {
    e := echo.New()

    // Regular routes
    e.GET("/", func(c echo.Context) error {
        return c.String(200, "Hello, World!")
    })

    // Register pprof handlers
    pprofGroup := e.Group("/debug/pprof")
    pprofGroup.GET("/", echo.WrapHandler(http.HandlerFunc(pprof.Index)))
    pprofGroup.GET("/cmdline", echo.WrapHandler(http.HandlerFunc(pprof.Cmdline)))
    pprofGroup.GET("/profile", echo.WrapHandler(http.HandlerFunc(pprof.Profile)))
    pprofGroup.GET("/symbol", echo.WrapHandler(http.HandlerFunc(pprof.Symbol)))
    pprofGroup.GET("/trace", echo.WrapHandler(http.HandlerFunc(pprof.Trace)))
    pprofGroup.GET("/heap", echo.WrapHandler(pprof.Handler("heap")))
    pprofGroup.GET("/goroutine", echo.WrapHandler(pprof.Handler("goroutine")))
    pprofGroup.GET("/block", echo.WrapHandler(pprof.Handler("block")))
    pprofGroup.GET("/threadcreate", echo.WrapHandler(pprof.Handler("threadcreate")))

    e.Logger.Fatal(e.Start(":1323"))
}

Now you can access profiling data at /debug/pprof/ and use Go's profiling tools:

bash
# Capture CPU profile
go tool pprof http://localhost:1323/debug/pprof/profile

# Capture memory profile
go tool pprof http://localhost:1323/debug/pprof/heap
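
The profile endpoint samples CPU for 30 seconds by default; you can adjust the window with the seconds query parameter:

bash
# Sample CPU for 60 seconds instead of the default 30
go tool pprof "http://localhost:1323/debug/pprof/profile?seconds=60"

Once the interactive pprof prompt opens, top shows the most expensive functions, list <function> prints annotated source, and web renders a call graph (requires Graphviz).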

Real-World Example: Monitoring a REST API

Let's put everything together in a more complete example:

go
package main

import (
    "context"
    "net/http"
    "net/http/pprof"
    "os"
    "os/signal"
    "strconv"
    "time"

    "github.com/labstack/echo/v4"
    "github.com/labstack/echo/v4/middleware"
    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
    // Define Prometheus metrics
    httpRequestsTotal = prometheus.NewCounterVec(
        prometheus.CounterOpts{
            Name: "api_http_requests_total",
            Help: "Total number of HTTP requests",
        },
        []string{"method", "endpoint", "status"},
    )

    httpRequestDuration = prometheus.NewHistogramVec(
        prometheus.HistogramOpts{
            Name:    "api_http_request_duration_seconds",
            Help:    "Duration of HTTP requests in seconds",
            Buckets: []float64{0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 2, 5, 10},
        },
        []string{"method", "endpoint", "status"},
    )

    activeRequests = prometheus.NewGauge(
        prometheus.GaugeOpts{
            Name: "api_active_requests",
            Help: "Number of active requests",
        },
    )
)

func init() {
    // Register metrics with Prometheus
    prometheus.MustRegister(httpRequestsTotal)
    prometheus.MustRegister(httpRequestDuration)
    prometheus.MustRegister(activeRequests)
}

// PrometheusMiddleware collects HTTP metrics for Prometheus
func PrometheusMiddleware(next echo.HandlerFunc) echo.HandlerFunc {
    return func(c echo.Context) error {
        // Don't measure the metrics endpoint itself
        if c.Path() == "/metrics" {
            return next(c)
        }

        activeRequests.Inc()
        defer activeRequests.Dec()

        start := time.Now()

        // Process the request
        err := next(c)

        // Record metrics after processing
        duration := time.Since(start).Seconds()
        statusStr := strconv.Itoa(c.Response().Status)
        method := c.Request().Method
        endpoint := c.Path()

        // Update Prometheus metrics
        httpRequestsTotal.WithLabelValues(method, endpoint, statusStr).Inc()
        httpRequestDuration.WithLabelValues(method, endpoint, statusStr).Observe(duration)

        return err
    }
}

func main() {
    e := echo.New()

    // Middleware
    e.Use(middleware.Recover())
    e.Use(middleware.Logger())
    e.Use(PrometheusMiddleware)

    // Rate limiting (20 requests/second per client) to prevent abuse
    e.Use(middleware.RateLimiter(middleware.NewRateLimiterMemoryStore(20)))

    // Regular API routes
    e.GET("/api/users", getUsers)
    e.GET("/api/users/:id", getUser)
    e.POST("/api/users", createUser)

    // Monitoring endpoint
    e.GET("/metrics", echo.WrapHandler(promhttp.Handler()))

    // Profiling endpoints
    pprofGroup := e.Group("/debug/pprof")
    pprofGroup.GET("/", echo.WrapHandler(http.HandlerFunc(pprof.Index)))
    pprofGroup.GET("/cmdline", echo.WrapHandler(http.HandlerFunc(pprof.Cmdline)))
    pprofGroup.GET("/profile", echo.WrapHandler(http.HandlerFunc(pprof.Profile)))
    pprofGroup.GET("/symbol", echo.WrapHandler(http.HandlerFunc(pprof.Symbol)))
    pprofGroup.GET("/trace", echo.WrapHandler(http.HandlerFunc(pprof.Trace)))
    pprofGroup.GET("/heap", echo.WrapHandler(pprof.Handler("heap")))
    pprofGroup.GET("/goroutine", echo.WrapHandler(pprof.Handler("goroutine")))

    // Start the server in a goroutine so we can shut down gracefully
    go func() {
        if err := e.Start(":1323"); err != nil && err != http.ErrServerClosed {
            e.Logger.Fatal("shutting down the server")
        }
    }()

    // Wait for an interrupt signal, then shut down with a timeout
    quit := make(chan os.Signal, 1)
    signal.Notify(quit, os.Interrupt)
    <-quit

    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()
    if err := e.Shutdown(ctx); err != nil {
        e.Logger.Fatal(err)
    }
}

// Handler functions
func getUsers(c echo.Context) error {
    // Simulate a database query
    time.Sleep(50 * time.Millisecond)

    users := []map[string]interface{}{
        {"id": 1, "name": "Alice"},
        {"id": 2, "name": "Bob"},
    }

    return c.JSON(http.StatusOK, users)
}

func getUser(c echo.Context) error {
    // Simulate a database query
    time.Sleep(30 * time.Millisecond)

    user := map[string]interface{}{
        "id":   c.Param("id"),
        "name": "Alice",
    }

    return c.JSON(http.StatusOK, user)
}

func createUser(c echo.Context) error {
    // Simulate processing time
    time.Sleep(100 * time.Millisecond)

    return c.JSON(http.StatusCreated, map[string]string{"status": "user created"})
}

This example demonstrates a comprehensive approach to monitoring an Echo application, including:

  • Prometheus metrics for request rates, latencies, and active requests
  • Performance profiling endpoints
  • Graceful shutdown to avoid disrupting active connections
  • Realistic API handlers with simulated processing times
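
To watch these metrics move, generate some traffic against the running server. One option, assuming you have the hey load generator installed (go install github.com/rakyll/hey@latest), is:

bash
# 1000 requests with 50 concurrent workers
hey -n 1000 -c 50 http://localhost:1323/api/users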

Visualizing Performance Data with Grafana

After collecting metrics with Prometheus, you can visualize them in Grafana. Here's a simple dashboard setup to start with:

  1. Set up a Prometheus data source in Grafana
  2. Create a dashboard with panels for:
    • Request rate (requests per second)
    • Response time percentiles (p50, p90, p99)
    • Error rate
    • Active requests
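
For those panels, example PromQL queries using the metric names from the real-world example above might look like this:

# Request rate (requests per second, averaged over 5 minutes)
sum(rate(api_http_requests_total[5m]))

# p99 response time
histogram_quantile(0.99, sum(rate(api_http_request_duration_seconds_bucket[5m])) by (le))

# Error rate (share of 5xx responses)
sum(rate(api_http_requests_total{status=~"5.."}[5m])) / sum(rate(api_http_requests_total[5m]))

# Active requests
api_active_requests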

Interpreting Performance Data

When analyzing your Echo application's performance, look for:

  1. Unusual latency spikes: May indicate resource contention or external service issues
  2. Increasing error rates: Could signal application bugs or infrastructure problems
  3. Growing memory usage: Potential memory leaks
  4. High CPU usage: Inefficient algorithms or need for code optimization
  5. Resource saturation: When your application hits resource limits
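
If you set up Prometheus as above, the default registry already exports go_memstats_* and go_goroutines series (and, on most platforms, process_cpu_seconds_total) that cover the memory, goroutine, and CPU signals. To check a suspicion directly from within the process, a minimal sketch like the following (the 30-second interval is an arbitrary choice) logs heap usage and goroutine counts over time:

go
package main

import (
    "log"
    "runtime"
    "time"
)

// logRuntimeStats prints heap usage, goroutine count, and GC cycles
// on an interval, which makes slow leaks visible over time.
func logRuntimeStats(interval time.Duration) {
    var m runtime.MemStats
    for range time.Tick(interval) {
        runtime.ReadMemStats(&m)
        log.Printf("heap_alloc=%d KiB goroutines=%d num_gc=%d",
            m.HeapAlloc/1024, runtime.NumGoroutine(), m.NumGC)
    }
}

func main() {
    go logRuntimeStats(30 * time.Second)
    // ... start your Echo server here; stats are logged in the background
    select {}
}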

Summary

Effective performance monitoring is essential for maintaining healthy Echo applications. In this guide, we've covered:

  • Basic metrics everyone should monitor (response time, request rate, error rate)
  • How to implement custom metrics collection in Echo
  • Integrating with Prometheus for comprehensive monitoring
  • Profiling Echo applications to identify bottlenecks
  • Building a real-world monitoring setup for a REST API
  • Visualizing and interpreting performance data

By implementing these monitoring practices, you'll be able to ensure your Echo applications perform well under various conditions and quickly identify and resolve issues when they arise.

Exercises

  1. Implement the Prometheus middleware in an existing Echo application and observe the metrics.
  2. Create a custom middleware that logs database query execution times.
  3. Set up a Grafana dashboard for your Echo application showing key performance metrics.
  4. Use pprof to identify a performance bottleneck in an Echo handler.
  5. Implement distributed tracing using Jaeger or Zipkin to monitor requests across multiple services.

