Echo Monitoring Setup

Introduction

Once you've deployed your Echo application to production, it's crucial to set up proper monitoring to ensure it runs smoothly and to quickly identify any issues that may arise. Monitoring helps you track your application's performance, resource usage, error rates, and other vital metrics that indicate the health of your system.

In this guide, we'll learn how to set up monitoring for your Echo applications using popular tools like Prometheus and Grafana. We'll also explore how to implement logging and tracing to achieve comprehensive observability for your Echo applications.

Why Monitoring Matters

Before diving into the technical setup, let's understand why monitoring is essential:

  • Detect issues early: Identify problems before they affect your users
  • Performance optimization: Gather data to improve application performance
  • Resource planning: Understand usage patterns to allocate resources effectively
  • Debugging: Collect information that helps troubleshoot issues
  • Business insights: Gain visibility into application usage and behavior

Setting Up Basic Logging

The first step in monitoring is implementing proper logging. Echo provides built-in middleware for logging HTTP requests and responses.

Basic Logger Setup

go
package main

import (
	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

func main() {
	e := echo.New()

	// Add logger middleware
	e.Use(middleware.Logger())

	e.GET("/", func(c echo.Context) error {
		return c.String(200, "Hello, World!")
	})

	e.Logger.Fatal(e.Start(":1323"))
}

When you run this application and send requests, the logger middleware writes one structured log line per request. With the default configuration the output is JSON and includes fields such as time, remote_ip, method, uri, status, latency_human, and bytes_out — for example (trimmed for readability):

{"time":"2023-07-15T14:37:06Z","method":"GET","uri":"/","status":200,"latency_human":"2.25ms","bytes_out":13}

Customizing the Logger

For more control, you can customize the logger format:

go
e.Use(middleware.LoggerWithConfig(middleware.LoggerConfig{
Format: "time=${time_rfc3339}, method=${method}, uri=${uri}, status=${status}, latency=${latency_human}\n",
}))

This will produce logs like:

time=2023-07-15T14:38:21Z, method=GET, uri=/, status=200, latency=1.21ms
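
The LoggerConfig struct also has an Output field (an io.Writer), which is useful when you want the access log written to a file instead of stdout. Here's a minimal sketch, assuming a writable app.log in the working directory:

go
// Sketch: send the access log to a file instead of stdout.
// Assumes ./app.log is writable; requires the "os" import.
logFile, err := os.OpenFile("app.log", os.O_APPEND|os.O_CREATE|os.O_WRONLY, 0o644)
if err != nil {
	e.Logger.Fatal(err)
}
e.Use(middleware.LoggerWithConfig(middleware.LoggerConfig{
	Format: "time=${time_rfc3339}, method=${method}, uri=${uri}, status=${status}, latency=${latency_human}\n",
	Output: logFile,
}))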

Implementing Metrics with Prometheus

Prometheus is an open-source monitoring system that collects and stores metrics as time-series data. Let's integrate it with Echo.

Step 1: Add Required Libraries

First, add the Echo Prometheus middleware:

bash
go get github.com/labstack/echo-contrib/prometheus

Step 2: Implement Prometheus Middleware

go
package main

import (
	"github.com/labstack/echo-contrib/prometheus"
	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

func main() {
	e := echo.New()

	// Add logger middleware
	e.Use(middleware.Logger())

	// Add Prometheus middleware
	p := prometheus.NewPrometheus("echo", nil)
	p.Use(e)

	e.GET("/", func(c echo.Context) error {
		return c.String(200, "Hello, World!")
	})

	e.Logger.Fatal(e.Start(":1323"))
}

With this setup, Prometheus metrics are exposed at the /metrics endpoint. Requesting it returns output like:

# HELP echo_request_duration_seconds The HTTP request latencies in seconds.
# TYPE echo_request_duration_seconds summary
echo_request_duration_seconds{code="200",method="GET",url="/"} 0.000151461
# HELP echo_requests_total How many HTTP requests processed, partitioned by status code and HTTP method.
# TYPE echo_requests_total counter
echo_requests_total{code="200",method="GET",url="/"} 1
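
The second argument to NewPrometheus (nil in the example above) is a skipper, which you can use to keep noisy routes such as health checks out of the request metrics. A rough sketch, assuming the argument accepts Echo's middleware.Skipper signature:

go
// Sketch: exclude the health endpoint from request metrics.
// Assumption: NewPrometheus's second argument is a middleware.Skipper,
// i.e. func(echo.Context) bool, as implied by the nil value above.
urlSkipper := func(c echo.Context) bool {
	return c.Path() == "/health"
}
p := prometheus.NewPrometheus("echo", urlSkipper)
p.Use(e)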

Step 3: Configure Prometheus Server

Create a prometheus.yml configuration file:

yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'echo'
    static_configs:
      - targets: ['localhost:1323']

Start Prometheus with this configuration:

bash
prometheus --config.file=prometheus.yml

Visualizing Metrics with Grafana

Grafana is an excellent choice for visualizing the metrics Prometheus collects.

Step 1: Install and Start Grafana

Download Grafana from grafana.com and start it:

bash
grafana-server --config=/path/to/grafana.ini

Step 2: Add Prometheus as a Data Source

  1. Open Grafana (default: http://localhost:3000)
  2. Log in (default: admin/admin)
  3. Go to Configuration > Data sources
  4. Add Prometheus data source
  5. Set URL to http://localhost:9090
  6. Click "Save & Test"

Step 3: Create a Dashboard

Create a new dashboard with panels for metrics like:

  • Request count by endpoint
  • Response time percentiles
  • Error rate
  • HTTP status codes distribution

Here's an example query for the per-second request rate, averaged over the last 5 minutes:

rate(echo_requests_total[5m])
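
Two more starting points for panels, mirroring the alert expressions used later in this guide — the error-rate ratio and the 95th-percentile latency:

rate(echo_requests_total{code=~"5.."}[5m]) / rate(echo_requests_total[5m])
histogram_quantile(0.95, rate(echo_request_duration_seconds_bucket[5m]))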

Custom Application Metrics

Beyond HTTP metrics, you might want to track business or application-specific metrics.

Creating Custom Counters

go
package main

import (
	"github.com/labstack/echo/v4"
	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

var (
	loginAttempts = prometheus.NewCounterVec(
		prometheus.CounterOpts{
			Name: "app_login_attempts_total",
			Help: "Total number of login attempts",
		},
		[]string{"successful"},
	)
)

func init() {
	prometheus.MustRegister(loginAttempts)
}

func main() {
	e := echo.New()

	// Expose metrics endpoint
	e.GET("/metrics", echo.WrapHandler(promhttp.Handler()))

	e.POST("/login", func(c echo.Context) error {
		// Your login logic
		success := true // Determined by your logic

		if success {
			loginAttempts.WithLabelValues("true").Inc()
			return c.String(200, "Login successful")
		}

		loginAttempts.WithLabelValues("false").Inc()
		return c.String(401, "Login failed")
	})

	e.Logger.Fatal(e.Start(":1323"))
}

Other Useful Metric Types

  • Gauge: A value that can go up and down (like memory usage)
  • Histogram: Samples observations and counts them in configurable buckets
  • Summary: Similar to histogram but calculates configurable quantiles
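
As a rough sketch (reusing the Prometheus client imports from the counter example above; the metric names are purely illustrative), a gauge and a histogram are declared and registered the same way:

go
// Sketch: a gauge and a histogram, registered like the counter above.
// "app_active_sessions" and "app_order_value_dollars" are illustrative names.
var (
	activeSessions = prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "app_active_sessions",
		Help: "Number of currently active sessions.",
	})

	orderValue = prometheus.NewHistogram(prometheus.HistogramOpts{
		Name:    "app_order_value_dollars",
		Help:    "Distribution of order values in dollars.",
		Buckets: prometheus.LinearBuckets(10, 25, 8), // 8 buckets starting at 10, width 25
	})
)

func init() {
	prometheus.MustRegister(activeSessions, orderValue)
}

// In your handlers:
//   activeSessions.Inc() / activeSessions.Dec()
//   orderValue.Observe(42.50)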

Distributed Tracing

For complex applications with multiple services, distributed tracing helps understand the flow of requests through your system.

Setting Up Tracing with OpenTelemetry

go
package main

import (
	"context"
	"log"

	"github.com/labstack/echo/v4"
	"go.opentelemetry.io/otel"
	"go.opentelemetry.io/otel/exporters/jaeger"
	"go.opentelemetry.io/otel/sdk/resource"
	sdktrace "go.opentelemetry.io/otel/sdk/trace"
	semconv "go.opentelemetry.io/otel/semconv/v1.4.0"
)

func initTracer() *sdktrace.TracerProvider {
	exporter, err := jaeger.New(jaeger.WithCollectorEndpoint(
		jaeger.WithEndpoint("http://localhost:14268/api/traces"),
	))
	if err != nil {
		log.Fatal(err)
	}

	tracerProvider := sdktrace.NewTracerProvider(
		sdktrace.WithSampler(sdktrace.AlwaysSample()),
		sdktrace.WithBatcher(exporter),
		sdktrace.WithResource(resource.NewWithAttributes(
			semconv.SchemaURL,
			semconv.ServiceNameKey.String("echo-service"),
		)),
	)

	otel.SetTracerProvider(tracerProvider)
	return tracerProvider
}

func main() {
	tp := initTracer()
	defer func() {
		if err := tp.Shutdown(context.Background()); err != nil {
			log.Printf("Error shutting down tracer provider: %v", err)
		}
	}()

	tracer := otel.Tracer("echo-server")

	e := echo.New()

	e.GET("/", func(c echo.Context) error {
		ctx := c.Request().Context()
		_, span := tracer.Start(ctx, "home-handler")
		defer span.End()

		// Your handler logic
		return c.String(200, "Hello, World!")
	})

	e.Logger.Fatal(e.Start(":1323"))
}
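
The handler above starts its span by hand. If you'd rather have every incoming request wrapped in a span automatically, there is OpenTelemetry contrib middleware for Echo (otelecho); a minimal sketch, assuming you add the go.opentelemetry.io/contrib/instrumentation/github.com/labstack/echo/otelecho module:

go
// Sketch: automatic per-request spans via the otelecho contrib middleware.
// Assumes the tracer provider from initTracer() above has already been set.
e := echo.New()
e.Use(otelecho.Middleware("echo-service")) // wraps every request in a server span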

Setting Up Health Checks

Health checks are crucial for container orchestration systems like Kubernetes.

go
package main

import (
	"net/http"

	"github.com/labstack/echo/v4"
)

func main() {
	e := echo.New()

	// Basic health check
	e.GET("/health", func(c echo.Context) error {
		return c.String(http.StatusOK, "OK")
	})

	// Detailed health check
	e.GET("/health/detailed", func(c echo.Context) error {
		// Check database connection
		dbOK := checkDatabaseConnection()

		// Check cache connection
		cacheOK := checkCacheConnection()

		if dbOK && cacheOK {
			return c.JSON(http.StatusOK, map[string]interface{}{
				"status":   "healthy",
				"database": "connected",
				"cache":    "connected",
			})
		}

		statusCode := http.StatusServiceUnavailable
		return c.JSON(statusCode, map[string]interface{}{
			"status":   "unhealthy",
			"database": dbHealthStatus(dbOK),
			"cache":    cacheHealthStatus(cacheOK),
		})
	})

	e.Logger.Fatal(e.Start(":1323"))
}

func checkDatabaseConnection() bool {
	// Implement actual DB connection check
	return true
}

func checkCacheConnection() bool {
	// Implement actual cache connection check
	return true
}

func dbHealthStatus(ok bool) string {
	if ok {
		return "connected"
	}
	return "disconnected"
}

func cacheHealthStatus(ok bool) string {
	if ok {
		return "connected"
	}
	return "disconnected"
}
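
The checkDatabaseConnection stub above is where a real probe belongs. One common approach with database/sql is a ping with a short timeout; a sketch, assuming a package-level *sql.DB named db (hypothetical here) has already been opened, plus the "context" and "time" imports:

go
// Sketch: a real database check using database/sql.
// Assumes a package-level `db *sql.DB` opened elsewhere (hypothetical).
func checkDatabaseConnection() bool {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	return db.PingContext(ctx) == nil
}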

Setting Up Alerts

Once you have metrics in Prometheus, you can set up alerts to get notified when issues occur.

Example Alert Rules in Prometheus

Create an alerts.yml file:

yaml
groups:
  - name: echo_alerts
    rules:
      - alert: HighErrorRate
        expr: rate(echo_requests_total{code=~"5.."}[5m]) / rate(echo_requests_total[5m]) > 0.05
        for: 2m
        labels:
          severity: critical
        annotations:
          summary: "High error rate detected"
          description: "Error rate is above 5% for 2 minutes (current value: {{ $value }})"

      - alert: SlowResponses
        expr: histogram_quantile(0.95, rate(echo_request_duration_seconds_bucket[5m])) > 0.5
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "Slow response times detected"
          description: "95th percentile of response time is above 500ms for 5 minutes"

Add this to your Prometheus configuration:

yaml
rule_files:
  - "alerts.yml"

Complete Monitoring Stack Example

For a comprehensive monitoring setup, you might use:

  1. Echo with Prometheus metrics
  2. Prometheus for metrics collection
  3. Grafana for visualization
  4. Jaeger for distributed tracing
  5. Alertmanager for alerting

You can set this up with Docker Compose for ease of deployment. (When everything runs inside Compose, point the Prometheus scrape target at the app service by name, e.g. app:1323, rather than localhost:1323.)

yaml
version: '3'

services:
  app:
    build: .
    ports:
      - "1323:1323"
    depends_on:
      - prometheus
      - jaeger

  prometheus:
    image: prom/prometheus
    ports:
      - "9090:9090"
    volumes:
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
      - ./alerts.yml:/etc/prometheus/alerts.yml

  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    depends_on:
      - prometheus

  jaeger:
    image: jaegertracing/all-in-one
    ports:
      - "16686:16686"
      - "14268:14268"

  alertmanager:
    image: prom/alertmanager
    ports:
      - "9093:9093"
    volumes:
      - ./alertmanager.yml:/etc/alertmanager/alertmanager.yml

Summary

Setting up proper monitoring for your Echo application is essential for maintaining application health and quickly responding to issues. In this guide, we covered:

  1. Basic logging with Echo's built-in middleware
  2. Metrics collection using Prometheus
  3. Metrics visualization with Grafana
  4. Custom application metrics for business requirements
  5. Distributed tracing with OpenTelemetry and Jaeger
  6. Health checks to ensure application availability
  7. Alerting to get notified of issues

By implementing these monitoring solutions, you'll have better visibility into your Echo application's performance and behavior, making it easier to maintain and troubleshoot in production environments.

Exercises

  1. Set up basic Prometheus monitoring for an existing Echo application
  2. Create a Grafana dashboard showing request count, latency, and error rate
  3. Implement a custom metric that tracks a business-specific event in your application
  4. Configure an alert that triggers when your application's error rate exceeds 5%
  5. Add distributed tracing to an Echo application that calls an external service

