Echo Load Testing
Introduction
Load testing is a critical component of application development that helps ensure your Echo web applications can handle expected (and unexpected) traffic loads. Unlike simple functional testing that verifies your application works correctly, load testing evaluates how your application performs under stress, identifying bottlenecks before they affect real users.
In this guide, we'll explore how to conduct effective load testing for your Echo applications, starting with the basics and working toward more advanced techniques.
What is Load Testing?
Load testing is the practice of simulating real-world load on your application to evaluate its performance characteristics. For Echo applications, this typically means:
- Generating concurrent HTTP requests to various endpoints
- Measuring response times under different loads
- Identifying bottlenecks in your application
- Determining maximum capacity before degradation occurs
Why Load Test Your Echo Application?
- Prevent production surprises: Discover performance issues before your users do
- Establish baseline metrics: Understand normal performance to detect regressions
- Capacity planning: Determine how much infrastructure you need
- Optimize performance: Identify and fix bottlenecks
- Validate SLAs: Ensure your application meets service level agreements
Basic Load Testing Tools
1. Apache Bench (ab)
Apache Bench is a simple command-line tool that's perfect for beginners:
```bash
# Install on Ubuntu/Debian
sudo apt-get install apache2-utils

# Install on macOS
brew install httpd

# Basic usage (send 1000 requests, 100 concurrently)
ab -n 1000 -c 100 http://localhost:1323/api/users
```
Example output:
```
This is ApacheBench, Version 2.3 <$Revision: 1879490 $>
...
Concurrency Level:      100
Time taken for tests:   2.350 seconds
Complete requests:      1000
Failed requests:        0
...
Requests per second:    425.53 [#/sec] (mean)
Time per request:       234.999 [ms] (mean)
...
```
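It is worth understanding how ab derives its headline numbers. The small program below reproduces them from the run above (the constants are hard-coded from that output):

```go
package main

import "fmt"

func main() {
	const (
		requests    = 1000.0 // Complete requests
		concurrency = 100.0  // Concurrency Level
		totalSecs   = 2.350  // Time taken for tests
	)

	// Throughput: completed requests divided by total wall-clock time.
	rps := requests / totalSecs

	// ab's "Time per request (mean)" is per simulated client:
	// concurrency * total time / requests, reported in milliseconds.
	perReqMs := concurrency * totalSecs / requests * 1000

	fmt.Printf("Requests per second: %.2f\n", rps)             // 425.53
	fmt.Printf("Time per request: %.0f ms (mean)\n", perReqMs) // 235 ms
}
```

So with 100 concurrent clients, each client sees roughly 235 ms per request even though the server completes a request every ~2.35 ms on average.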
2. Hey (formerly known as Boom)
Hey is a modern HTTP load generator:
```bash
# Install using Go
go install github.com/rakyll/hey@latest

# Basic usage (send 1000 requests, 100 concurrently)
hey -n 1000 -c 100 http://localhost:1323/api/users
```
Example output:
```
Summary:
  Total:        2.7553 secs
  Slowest:      1.8852 secs
  Fastest:      0.0147 secs
  Average:      0.2615 secs
  Requests/sec: 362.9472

Response time histogram:
  0.015 [1]   |
  0.202 [533] |■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■■
  0.389 [254] |■■■■■■■■■■■■■■■
  0.576 [130] |■■■■■■■■
  ...
```
Creating a Test Echo Application
Before diving into more complex load testing, let's create a simple Echo application that we'll use for testing:
```go
package main

import (
	"math/rand"
	"net/http"
	"time"

	"github.com/labstack/echo/v4"
	"github.com/labstack/echo/v4/middleware"
)

func main() {
	e := echo.New()

	// Add middleware
	e.Use(middleware.Logger())
	e.Use(middleware.Recover())

	// Routes
	e.GET("/", func(c echo.Context) error {
		return c.String(http.StatusOK, "Hello, World!")
	})

	e.GET("/fast", func(c echo.Context) error {
		return c.JSON(http.StatusOK, map[string]string{
			"message": "This is a fast endpoint",
		})
	})

	e.GET("/slow", func(c echo.Context) error {
		// Simulate a slow operation
		time.Sleep(500 * time.Millisecond)
		return c.JSON(http.StatusOK, map[string]string{
			"message": "This is a slow endpoint",
		})
	})

	e.GET("/db-simulation", func(c echo.Context) error {
		// Simulate a database operation with a random 100-500ms response time
		time.Sleep(time.Duration(100+rand.Intn(400)) * time.Millisecond)
		return c.JSON(http.StatusOK, map[string]string{
			"message": "Database operation completed",
		})
	})

	// Start server
	e.Logger.Fatal(e.Start(":1323"))
}
```
Save this as `server.go` and run it with `go run server.go`. Now we have different endpoints to test with varying response characteristics.
Advanced Load Testing with k6
k6 is a modern load testing tool that allows for creating more complex test scenarios. It's particularly useful for Echo applications as it provides detailed metrics and supports JavaScript for test scripting.
Installation
```bash
# macOS
brew install k6

# Windows (with Chocolatey)
choco install k6

# Linux (Debian/Ubuntu)
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys C5AD17C747E3415A3642D57D77C6C491D6AC1D69
echo "deb https://dl.k6.io/deb stable main" | sudo tee /etc/apt/sources.list.d/k6.list
sudo apt-get update
sudo apt-get install k6
```
Basic k6 Test Script
Create a file named `load-test.js`:
```javascript
import http from 'k6/http';
import { sleep, check } from 'k6';
import { Rate } from 'k6/metrics';

// Custom metric
const errorRate = new Rate('errors');

export const options = {
  stages: [
    { duration: '30s', target: 20 }, // Ramp up to 20 users
    { duration: '1m', target: 20 },  // Stay at 20 users for 1 minute
    { duration: '30s', target: 0 },  // Ramp down to 0 users
  ],
  thresholds: {
    http_req_duration: ['p(95)<500'], // 95% of requests should be below 500ms
    errors: ['rate<0.1'],             // Error rate should be less than 10%
  },
};

export default function () {
  // Test the fast endpoint
  const fastRes = http.get('http://localhost:1323/fast');
  const fastCheck = check(fastRes, {
    'status is 200': (r) => r.status === 200,
    'response time < 200ms': (r) => r.timings.duration < 200,
  });
  errorRate.add(!fastCheck);

  // Test the slow endpoint
  const slowRes = http.get('http://localhost:1323/slow');
  const slowCheck = check(slowRes, {
    'status is 200': (r) => r.status === 200,
    'response time < 800ms': (r) => r.timings.duration < 800,
  });
  errorRate.add(!slowCheck);

  // Add some randomness to the test
  sleep(1 + Math.random());
}
```
Run the test with:
```bash
k6 run load-test.js
```
The output will include detailed statistics about response times, request rates, and whether your thresholds were met.
Real-World Load Testing Strategies
Now that we understand the basics, let's explore more realistic load testing approaches:
1. Gradual Ramp-up Testing
This approach gradually increases the load to identify at what point your system starts to degrade:
```javascript
export const options = {
  stages: [
    { duration: '2m', target: 100 }, // Ramp up to 100 users over 2 minutes
    { duration: '5m', target: 100 }, // Stay at 100 users for 5 minutes
    { duration: '2m', target: 200 }, // Ramp up to 200 users over 2 minutes
    { duration: '5m', target: 200 }, // Stay at 200 users for 5 minutes
    { duration: '2m', target: 300 }, // Ramp up to 300 users over 2 minutes
    { duration: '5m', target: 300 }, // Stay at 300 users for 5 minutes
    { duration: '2m', target: 0 },   // Ramp down to 0 users
  ],
};
```
2. Spike Testing
Test how your application handles sudden spikes in traffic:
```javascript
export const options = {
  stages: [
    { duration: '10s', target: 10 },  // Baseline
    { duration: '1m', target: 10 },   // Maintain baseline
    { duration: '10s', target: 500 }, // Spike to 500 users
    { duration: '3m', target: 500 },  // Stay at 500 for 3 minutes
    { duration: '10s', target: 10 },  // Scale back to baseline
    { duration: '3m', target: 10 },   // Maintain baseline
    { duration: '10s', target: 0 },   // Scale down to 0
  ],
};
```
3. Endurance Testing
Test how your application performs over an extended period:
```javascript
export const options = {
  stages: [
    { duration: '5m', target: 100 }, // Ramp up to 100 users
    { duration: '2h', target: 100 }, // Stay at 100 users for 2 hours
    { duration: '5m', target: 0 },   // Scale down to 0 users
  ],
};
```
4. Realistic User Scenarios
Simulate actual user flows through your application:
```javascript
import http from 'k6/http';
import { sleep, check } from 'k6';

export default function () {
  // Step 1: User visits homepage
  const homeRes = http.get('http://localhost:1323/');
  check(homeRes, {
    'homepage status is 200': (r) => r.status === 200,
  });
  sleep(Math.random() * 3);

  // Step 2: User logs in
  const loginRes = http.post('http://localhost:1323/login', {
    username: 'testuser',
    password: 'password123',
  });
  check(loginRes, {
    'login successful': (r) => r.status === 200 && r.json('token') !== '',
  });

  // Extract token from response
  const token = loginRes.json('token');
  sleep(Math.random() * 2);

  // Step 3: User accesses protected resource
  const headers = { Authorization: `Bearer ${token}` };
  const protectedRes = http.get('http://localhost:1323/dashboard', { headers });
  check(protectedRes, {
    'can access dashboard': (r) => r.status === 200,
  });
  sleep(Math.random() * 5);
}
```
Analyzing Load Test Results
After running load tests, focus on these key metrics:
- Response time: How quickly your application responds (mean, median, 90th/95th/99th percentiles)
- Throughput: Requests per second your application can handle
- Error rate: Percentage of requests that fail
- Resource utilization: CPU, memory, network, and disk usage during testing
Common Bottlenecks in Echo Applications
- Database queries: Inefficient queries or missing indexes
  - Solution: Use the Echo context's Logger to time database operations and optimize slow queries
- External service calls: APIs that your application depends on
  - Solution: Implement timeouts, circuit breakers, and consider caching responses
- Middleware overhead: Too many middleware functions
  - Solution: Profile middleware execution time and only use what's necessary
- Template rendering: Slow HTML template rendering
  - Solution: Cache templates or consider server-side rendering optimizations
- Resource contention: Too many goroutines competing for resources
  - Solution: Use appropriate connection pooling and limit concurrent execution
Continuous Load Testing
Integrate load testing into your CI/CD pipeline to catch performance regressions early:
```yaml
# Example GitHub Actions workflow
name: Performance Testing

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  k6_load_test:
    name: k6 Load Test
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v4

      - name: Start Echo Server
        run: |
          go run server.go &
          sleep 5

      - name: Run k6 Load Test
        uses: grafana/k6-action@v0.3.1
        with:
          filename: load-test.js
          flags: --summary-export=summary.json

      - name: Store k6 results
        if: always()
        uses: actions/upload-artifact@v4
        with:
          name: k6-report
          path: summary.json
```
Best Practices for Echo Load Testing
- Test in a production-like environment: Use similar hardware, configurations, and data volumes
- Test one component at a time: Isolate components to identify specific bottlenecks
- Simulate realistic user behavior: Add think time between requests and randomize inputs
- Monitor system resources: Track CPU, memory, network, and disk usage during tests
- Test database performance separately: Database bottlenecks often hide application issues
- Include error scenarios: Test how your application handles failures and errors
- Use proper test data: Test with realistic data volumes and distributions
- Benchmark against requirements: Have clear performance goals before testing
Summary
Load testing is an essential practice for ensuring your Echo applications can handle the traffic demands of your users. By using tools like Apache Bench, Hey, and k6, you can simulate realistic loads and identify performance bottlenecks before they affect your users.
Remember these key points:
- Start with simple tests and gradually increase complexity
- Test for various scenarios: gradual ramps, spikes, and endurance
- Monitor key metrics like response time, throughput, and error rates
- Address common bottlenecks in databases, external services, and middleware
- Integrate load testing into your continuous integration pipeline
With regular load testing as part of your development process, you can build Echo applications that remain fast, reliable, and responsive even under heavy load.
Additional Resources
- k6 Documentation
- Echo Performance Best Practices
- Golang Performance Optimization
- Database Performance Tuning
Exercises
- Create a load test that gradually increases to 50 concurrent users for your own Echo application
- Implement a realistic user journey test that simulates a login, several page views, and a logout
- Set up a continuous load testing pipeline using GitHub Actions or Jenkins
- Compare the performance of two different implementations of the same endpoint
- Identify and fix a performance bottleneck in an existing Echo application