Echo API Performance

Introduction

Performance is a critical aspect of any web application or API. An efficiently performing API not only provides a better user experience but also reduces operational costs and increases scalability. Echo is a high-performance, minimalist web framework for Go that provides excellent tools for building optimized APIs. In this guide, we'll explore how to monitor, measure, and improve the performance of your Echo API applications.

Why Performance Matters

Before diving into optimizations, let's understand why API performance is important:

  • User Experience: Faster APIs lead to more responsive applications
  • Cost Efficiency: Optimized APIs require less computing resources
  • Scalability: Well-performing APIs can handle more requests with the same infrastructure
  • SEO Ranking: For web applications, performance impacts search engine rankings
  • Battery Life: For mobile clients, efficient APIs conserve device battery

Performance Measurement Basics

Response Time Metrics

To improve your Echo API's performance, you need to measure it first:

go
package main

import (
	"net/http"
	"time"

	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

func main() {
	e := echo.New()

	// Add the logger middleware with a custom format that includes latency
	e.Use(middleware.LoggerWithConfig(middleware.LoggerConfig{
		Format: "method=${method}, uri=${uri}, status=${status}, latency=${latency_human}\n",
	}))

	// Routes
	e.GET("/api/users", getUsers)

	e.Logger.Fatal(e.Start(":1323"))
}

func getUsers(c echo.Context) error {
	// Simulate database operation
	time.Sleep(100 * time.Millisecond)

	return c.JSON(http.StatusOK, map[string]string{
		"message": "Users retrieved successfully",
	})
}

The middleware logs each request with its processing time (latency), which is a key performance indicator.

Load Testing with Hey

To simulate real-world traffic, use tools like hey for load testing:

bash
# Install hey
go install github.com/rakyll/hey@latest

# Run a load test (200 requests with 10 concurrent users)
hey -n 200 -c 10 http://localhost:1323/api/users

Sample output:

Summary:
Total: 2.5431 secs
Slowest: 0.1523 secs
Fastest: 0.1013 secs
Average: 0.1266 secs
Requests/sec: 78.6461

Response time histogram:
0.101 [1] |
0.106 [12] |■■■
0.111 [24] |■■■■■■
0.116 [37] |■■■■■■■■■
0.121 [41] |■■■■■■■■■■
0.126 [31] |■■■■■■■■
0.131 [23] |■■■■■■
0.137 [14] |■■■
0.142 [9] |■■
0.147 [5] |■
0.152 [3] |■

Common Performance Bottlenecks

1. Database Operations

Database interactions are often the biggest performance bottlenecks. Here's how to optimize them:

go
import (
	"database/sql"
	"net/http"
	"time"

	_ "github.com/go-sql-driver/mysql"
	"github.com/labstack/echo/v4"
)

// Inefficient approach: a new connection pool per request
func getUsersInefficient(c echo.Context) error {
	db, _ := sql.Open("mysql", "user:password@/dbname")
	defer db.Close() // Opening and closing the pool for each request

	rows, _ := db.Query("SELECT * FROM users") // Fetching all fields
	defer rows.Close()
	// Process rows into a users slice...
	return c.JSON(http.StatusOK, users)
}

// Optimized approach: one shared pool, created at startup
var dbPool *sql.DB

func initDB() {
	var err error
	dbPool, err = sql.Open("mysql", "user:password@/dbname")
	if err != nil {
		panic(err)
	}

	dbPool.SetMaxOpenConns(25)
	dbPool.SetMaxIdleConns(25)
	dbPool.SetConnMaxLifetime(5 * time.Minute)
}

func getUsersEfficient(c echo.Context) error {
	rows, err := dbPool.Query("SELECT id, name FROM users") // Only needed fields
	if err != nil {
		return c.JSON(http.StatusInternalServerError, map[string]string{
			"error": "Database error",
		})
	}
	defer rows.Close()
	// Process rows into a users slice...
	return c.JSON(http.StatusOK, users)
}

2. Memory Management

Efficient memory usage is crucial for high-performance APIs:

go
// Inefficient - loading all data into memory at once
func getAllLogsInefficient(c echo.Context) error {
	var allLogs []LogEntry
	rows, _ := db.Query("SELECT * FROM logs")
	defer rows.Close()

	// Loading every log into memory before responding
	for rows.Next() {
		var log LogEntry
		rows.Scan(&log.ID, &log.Message, &log.Timestamp)
		allLogs = append(allLogs, log)
	}

	return c.JSON(http.StatusOK, allLogs)
}

// Efficient - paginated approach (requires the strconv import)
func getAllLogsEfficient(c echo.Context) error {
	page, _ := strconv.Atoi(c.QueryParam("page"))
	if page < 1 {
		page = 1
	}
	pageSize := 100
	offset := (page - 1) * pageSize

	rows, _ := db.Query("SELECT * FROM logs LIMIT ? OFFSET ?", pageSize, offset)
	defer rows.Close()

	var logs []LogEntry
	for rows.Next() {
		var log LogEntry
		rows.Scan(&log.ID, &log.Message, &log.Timestamp)
		logs = append(logs, log)
	}

	return c.JSON(http.StatusOK, logs)
}
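The paginated handler above trusts whatever page number the client sends. A small helper that clamps user-supplied paging values keeps every query bounded; this is a minimal sketch, and the `clampPage` name and the cap of 100 are illustrative choices, not part of Echo:

```go
package main

import "fmt"

// clampPage normalizes user-supplied pagination values: pages start at 1,
// and the page size is capped so a single request can never pull the
// whole table. The cap of 100 mirrors the pageSize used above.
func clampPage(page, size, maxSize int) (normPage, normSize int) {
	if page < 1 {
		page = 1
	}
	if size < 1 || size > maxSize {
		size = maxSize
	}
	return page, size
}

func main() {
	page, size := clampPage(0, 500, 100)
	fmt.Println(page, size) // 1 100

	// The clamped values translate directly into the LIMIT/OFFSET query.
	offset := (page - 1) * size
	fmt.Println(offset) // 0
}
```

Centralizing this logic in one helper also means every paginated endpoint enforces the same bounds.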

Echo-Specific Performance Optimizations

Using Pre-compiled Templates

For APIs that render HTML, using pre-compiled templates improves performance:

go
import (
	"html/template"
	"io"
	"net/http"

	"github.com/labstack/echo/v4"
)

type Template struct {
	templates *template.Template
}

func (t *Template) Render(w io.Writer, name string, data interface{}, c echo.Context) error {
	return t.templates.ExecuteTemplate(w, name, data)
}

func main() {
	e := echo.New()

	// Pre-compile templates once at startup
	t := &Template{
		templates: template.Must(template.ParseGlob("views/*.html")),
	}
	e.Renderer = t

	e.GET("/welcome", func(c echo.Context) error {
		return c.Render(http.StatusOK, "welcome.html", map[string]interface{}{
			"name": "John",
		})
	})

	e.Logger.Fatal(e.Start(":1323"))
}

Implementing Caching

Add a simple in-memory cache for frequently accessed data:

go
package main

import (
	"net/http"
	"sync"
	"time"

	"github.com/labstack/echo/v4"
)

// Simple cache implementation
type Cache struct {
	items map[string]cacheItem
	mu    sync.RWMutex
}

type cacheItem struct {
	value      interface{}
	expiration time.Time
}

func NewCache() *Cache {
	return &Cache{
		items: make(map[string]cacheItem),
	}
}

func (c *Cache) Set(key string, value interface{}, duration time.Duration) {
	c.mu.Lock()
	defer c.mu.Unlock()

	c.items[key] = cacheItem{
		value:      value,
		expiration: time.Now().Add(duration),
	}
}

func (c *Cache) Get(key string) (interface{}, bool) {
	c.mu.RLock()
	defer c.mu.RUnlock()

	item, exists := c.items[key]
	if !exists {
		return nil, false
	}

	if time.Now().After(item.expiration) {
		return nil, false
	}

	return item.value, true
}

// Using the cache with Echo
var cache = NewCache()

func getPopularProducts(c echo.Context) error {
	cacheKey := "popular_products"

	// Try to get from cache first
	if cachedProducts, found := cache.Get(cacheKey); found {
		return c.JSON(http.StatusOK, cachedProducts)
	}

	// If not in cache, fetch from the database (fetchProductsFromDB not shown)
	products, err := fetchProductsFromDB()
	if err != nil {
		return c.JSON(http.StatusInternalServerError, map[string]string{
			"error": "Failed to fetch products",
		})
	}

	// Store in cache for 5 minutes
	cache.Set(cacheKey, products, 5*time.Minute)

	return c.JSON(http.StatusOK, products)
}

Compressing HTTP Responses

Enable response compression to reduce bandwidth usage:

go
package main

import (
	"net/http"

	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

func main() {
	e := echo.New()

	// Add gzip compression middleware
	e.Use(middleware.GzipWithConfig(middleware.GzipConfig{
		Level: 5, // Compression level between 1 (best speed) and 9 (best compression)
	}))

	e.GET("/api/large-data", getLargeData)

	e.Logger.Fatal(e.Start(":1323"))
}

func getLargeData(c echo.Context) error {
	// Generate or fetch large data (generateLargeJSONData not shown)
	largeData := generateLargeJSONData()

	// The response will be automatically compressed
	return c.JSON(http.StatusOK, largeData)
}

Real-World Performance Optimization Case Study

Let's walk through optimizing a user service API:

go
package main

import (
	"context"
	"database/sql"
	"log"
	"net/http"
	"strconv"
	"sync"
	"time"

	_ "github.com/go-sql-driver/mysql"
	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

type User struct {
	ID       int    `json:"id"`
	Username string `json:"username"`
	Email    string `json:"email"`
	Bio      string `json:"bio"`
}

var (
	db         *sql.DB
	userCache  = make(map[int]cacheEntry)
	cacheMutex = &sync.RWMutex{}
)

type cacheEntry struct {
	user      User
	timestamp time.Time
}

func main() {
	// Initialize database connection
	var err error
	db, err = sql.Open("mysql", "user:password@/userdb")
	if err != nil {
		log.Fatal("Database connection failed:", err)
	}

	// Configure connection pool
	db.SetMaxOpenConns(25)
	db.SetMaxIdleConns(25)
	db.SetConnMaxLifetime(5 * time.Minute)

	e := echo.New()

	// Middleware
	e.Use(middleware.Logger())
	e.Use(middleware.Recover())
	e.Use(middleware.GzipWithConfig(middleware.GzipConfig{
		Level: 5,
	}))

	// Routes
	e.GET("/api/users/:id", getUserByID)
	e.GET("/api/users", getUsers)

	// Start server
	e.Logger.Fatal(e.Start(":1323"))
}

func getUserByID(c echo.Context) error {
	// Parse and validate the ID
	id, err := strconv.Atoi(c.Param("id"))
	if err != nil {
		return c.JSON(http.StatusBadRequest, map[string]string{
			"error": "Invalid user ID",
		})
	}

	// Check cache first
	cacheMutex.RLock()
	entry, found := userCache[id]
	cacheMutex.RUnlock()

	if found && time.Since(entry.timestamp) < 5*time.Minute {
		return c.JSON(http.StatusOK, entry.user)
	}

	// Set timeout for database query
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()

	var user User
	err = db.QueryRowContext(ctx,
		"SELECT id, username, email, bio FROM users WHERE id = ?",
		id).Scan(&user.ID, &user.Username, &user.Email, &user.Bio)

	if err != nil {
		if err == sql.ErrNoRows {
			return c.JSON(http.StatusNotFound, map[string]string{
				"error": "User not found",
			})
		}
		return c.JSON(http.StatusInternalServerError, map[string]string{
			"error": "Database error",
		})
	}

	// Update cache
	cacheMutex.Lock()
	userCache[id] = cacheEntry{
		user:      user,
		timestamp: time.Now(),
	}
	cacheMutex.Unlock()

	return c.JSON(http.StatusOK, user)
}

func getUsers(c echo.Context) error {
	limit := 10 // Default page size
	offset := 0

	// Parse query parameters, ignoring non-numeric values
	if v, err := strconv.Atoi(c.QueryParam("limit")); err == nil && v > 0 {
		limit = v
	}
	if limit > 100 {
		limit = 100 // Cap maximum limit
	}

	if v, err := strconv.Atoi(c.QueryParam("offset")); err == nil && v >= 0 {
		offset = v
	}

	// Set timeout for database query
	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	rows, err := db.QueryContext(ctx,
		"SELECT id, username, email, bio FROM users LIMIT ? OFFSET ?",
		limit, offset)

	if err != nil {
		return c.JSON(http.StatusInternalServerError, map[string]string{
			"error": "Database error",
		})
	}
	defer rows.Close()

	users := []User{}
	for rows.Next() {
		var user User
		if err := rows.Scan(&user.ID, &user.Username, &user.Email, &user.Bio); err != nil {
			continue
		}
		users = append(users, user)
	}

	return c.JSON(http.StatusOK, users)
}

This implementation includes:

  1. Connection pooling for database efficiency
  2. In-memory caching for frequent user lookups
  3. Response compression with Gzip
  4. Query parameter limits to prevent excessive data loads
  5. Context timeouts to prevent slow query bottlenecks
  6. Proper error handling and HTTP status codes

Performance Monitoring

To continuously monitor your Echo API's performance in production:

Using Prometheus and Grafana

go
package main

import (
	"github.com/labstack/echo-contrib/prometheus"
	"github.com/labstack/echo/v4"
)

func main() {
	e := echo.New()

	// Enable metrics middleware
	p := prometheus.NewPrometheus("echo", nil)
	p.Use(e)

	// Routes
	e.GET("/api/users", getUsers)

	e.Logger.Fatal(e.Start(":1323"))
}

With this integration, you can create dashboards in Grafana to track:

  • Request rates
  • Response times
  • Error rates
  • Resource usage

Performance Best Practices Summary

  1. Use connection pooling for databases and external services
  2. Implement caching for frequently accessed data
  3. Optimize database queries by fetching only needed columns
  4. Paginate API results to limit data size
  5. Use compression for large responses
  6. Set timeouts for all external operations
  7. Profile your application regularly to identify bottlenecks
  8. Monitor performance metrics in production
  9. Use Go's concurrency features for parallel operations
  10. Pre-compile templates and reuse objects where possible
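Object reuse (point 10) is usually done with sync.Pool: handlers borrow an object, use it, and return it instead of allocating a fresh one per request. A minimal sketch, with the `bufPool` and `render` names being illustrative:

```go
package main

import (
	"bytes"
	"fmt"
	"sync"
)

// bufPool hands out reusable buffers so hot handlers don't allocate
// a fresh one on every request.
var bufPool = sync.Pool{
	New: func() interface{} { return new(bytes.Buffer) },
}

func render(name string) string {
	buf := bufPool.Get().(*bytes.Buffer)
	buf.Reset() // a pooled buffer may still hold data from a previous use
	defer bufPool.Put(buf)

	fmt.Fprintf(buf, "hello, %s", name)
	return buf.String()
}

func main() {
	fmt.Println(render("echo")) // hello, echo
}
```

Under load this trims garbage-collector pressure, since buffers are recycled rather than collected; always Reset a pooled object before reuse.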

Exercises

  1. Basic Performance Measurement: Implement a custom middleware that logs detailed timing information about your API endpoints.

  2. Caching Implementation: Add Redis caching to the user API example above, replacing the in-memory cache.

  3. Load Testing: Set up a load test using Hey or Apache Benchmark to measure your API's performance under load.

  4. Optimization Challenge: Take an existing API endpoint that performs multiple database queries and optimize it to use a single query.

  5. Profiling: Use Go's pprof tools to profile your Echo application and identify CPU and memory bottlenecks.

Final Thoughts

Remember that performance optimization should be data-driven. Always measure before and after making changes to ensure your optimizations are actually improving performance.



If you spot any mistakes on this website, please let me know at [email protected]. I’d greatly appreciate your feedback! :)