Express Benchmarking

When building web applications with Express.js, understanding how your application performs under various conditions is crucial. Benchmarking allows you to measure your application's performance, identify bottlenecks, and make data-driven optimizations. This guide will walk you through the process of benchmarking your Express applications effectively.

What is Benchmarking?

Benchmarking is the process of measuring and evaluating the performance of your application. In the context of Express.js applications, we typically measure metrics such as:

  • Requests per second - How many requests your server can handle in one second
  • Latency - How long it takes to process a request
  • Memory usage - How much memory your application consumes
  • CPU usage - How much CPU resources your application requires

By benchmarking your Express application, you can establish performance baselines, identify performance regressions, and measure the impact of your optimizations.
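Two of these metrics can be sampled from inside the process itself. As a minimal sketch using Node's built-in `process` API:

```javascript
// Sample the current process's memory and CPU usage (two of the
// metrics listed above) using Node's built-in process API.
const mem = process.memoryUsage();
console.log(`Heap used: ${(mem.heapUsed / 1024 / 1024).toFixed(1)} MB`);
console.log(`RSS:       ${(mem.rss / 1024 / 1024).toFixed(1)} MB`);

const cpu = process.cpuUsage(); // microseconds of CPU time since process start
console.log(`CPU time (user):   ${(cpu.user / 1000).toFixed(1)} ms`);
console.log(`CPU time (system): ${(cpu.system / 1000).toFixed(1)} ms`);
```

Logging these before and after a load test is a quick way to spot memory growth or CPU saturation while an external tool drives traffic.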

Why Benchmark Your Express Application?

  • Identify performance bottlenecks
  • Validate performance improvements
  • Determine your application's capacity limits
  • Make informed decisions about scaling
  • Compare different implementation approaches

Tools for Benchmarking Express Applications

Let's explore some popular tools for benchmarking Express applications:

1. ApacheBench (ab)

ApacheBench is a simple command-line tool for benchmarking HTTP servers.

Installation:

  • On Ubuntu/Debian: sudo apt-get install apache2-utils
  • On macOS: ab ships with the operating system, so no installation is needed

Example usage:

bash
ab -n 1000 -c 100 http://localhost:3000/

This command sends 1000 requests with a concurrency level of 100 to the specified URL.

Sample output:

Concurrency Level:      100
Time taken for tests:   3.45 seconds
Complete requests:      1000
Failed requests:        0
Requests per second:    289.86 [#/sec] (mean)
Time per request:       344.99 [ms] (mean)
Transfer rate:          42.77 [Kbytes/sec] received

2. Autocannon

Autocannon is an HTTP/1.1 benchmarking tool written in Node.js, which makes it a natural fit for Express applications.

Installation:

bash
npm install -g autocannon

Example usage:

bash
autocannon -c 100 -d 10 http://localhost:3000/

This sends as many requests as possible in 10 seconds with 100 concurrent connections.

Sample output:

Running 10s test @ http://localhost:3000/
100 connections

┌─────────┬───────┬───────┬───────┬───────┬──────────┬─────────┬────────┐
│ Stat    │ 2.5%  │ 50%   │ 97.5% │ 99%   │ Avg      │ Stdev   │ Max    │
├─────────┼───────┼───────┼───────┼───────┼──────────┼─────────┼────────┤
│ Latency │ 11 ms │ 22 ms │ 80 ms │ 95 ms │ 28.27 ms │ 23.4 ms │ 189 ms │
└─────────┴───────┴───────┴───────┴───────┴──────────┴─────────┴────────┘
┌───────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┬─────────┐
│ Stat      │ 1%      │ 2.5%    │ 50%     │ 97.5%   │ Avg     │ Stdev   │ Min     │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ Req/Sec   │ 3,463   │ 3,463   │ 3,863   │ 4,159   │ 3,893   │ 178     │ 3,462   │
├───────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┼─────────┤
│ Bytes/Sec │ 550 kB  │ 550 kB  │ 614 kB  │ 660 kB  │ 618 kB  │ 28.3 kB │ 550 kB  │
└───────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┴─────────┘

Req/Bytes counts sampled once per second.

38k requests in 10s, 6.18 MB read

3. wrk

wrk is a modern HTTP benchmarking tool capable of generating significant load.

Installation:

  • On Ubuntu/Debian: sudo apt-get install wrk
  • On macOS: brew install wrk

Example usage:

bash
wrk -t12 -c400 -d30s http://localhost:3000/

This runs a benchmark for 30 seconds, using 12 threads, and keeping 400 HTTP connections open.

Sample output:

Running 30s test @ http://localhost:3000/
12 threads and 400 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    45.65ms   12.46ms  345.25ms   96.24%
    Req/Sec      0.95k    130.39     1.92k    81.22%
  342153 requests in 30.10s, 64.56MB read
Requests/sec:  11368.35
Transfer/sec:      2.14MB

Setting Up a Benchmark Framework

Let's create a simple Express application and set up a benchmark for it:

javascript
// app.js
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Hello World!');
});

app.get('/api/users', (req, res) => {
  // Simulate database delay
  setTimeout(() => {
    res.json([
      { id: 1, name: 'John' },
      { id: 2, name: 'Jane' },
      { id: 3, name: 'Bob' }
    ]);
  }, 100);
});

app.get('/api/cpu-intensive', (req, res) => {
  // Simulate a CPU-intensive operation
  let result = 0;
  for (let i = 0; i < 1000000; i++) {
    result += Math.random();
  }
  res.json({ result });
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

Now let's create a script to benchmark these endpoints:

javascript
// benchmark.js
const autocannon = require('autocannon');
const { PassThrough } = require('stream');

function run(url) {
  const buf = [];
  const outputStream = new PassThrough();

  const instance = autocannon({
    url,
    connections: 100,
    duration: 10,
    pipelining: 1,
  });

  // Render autocannon's progress/result output into our stream
  autocannon.track(instance, { outputStream });

  outputStream.on('data', data => buf.push(data));
  instance.on('done', () => {
    process.stdout.write(Buffer.concat(buf));
  });
}

console.log('Running benchmark for root endpoint...');
run('http://localhost:3000/');

setTimeout(() => {
  console.log('\nRunning benchmark for users API...');
  run('http://localhost:3000/api/users');
}, 11000);

setTimeout(() => {
  console.log('\nRunning benchmark for CPU-intensive API...');
  run('http://localhost:3000/api/cpu-intensive');
}, 22000);

To run this benchmark:

bash
# First, install dependencies
npm install express autocannon

# Start your Express app
node app.js

# In a separate terminal, run the benchmark
node benchmark.js

Best Practices for Benchmarking

  1. Test in an environment similar to production: Benchmarking on your development machine may not accurately represent production performance.

  2. Isolate your tests: Make sure no other resource-intensive applications are running during benchmarking.

  3. Run multiple benchmark iterations: Performance can vary between runs, so take an average of multiple runs.

  4. Test different endpoints: Different routes may have different performance characteristics.

  5. Simulate realistic load patterns: Test scenarios that closely match your expected traffic patterns.

  6. Monitor system resources: Use tools like top, htop, or Node.js's built-in monitoring capabilities.

  7. Start with a baseline: Before making optimizations, establish baseline performance metrics.
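Point 3 can be automated. The helper below is a sketch: the stubbed `runOnce` stands in for a real benchmark call (with autocannon's promise API you could pass `() => autocannon({ url, connections: 100, duration: 10 })`), and the helper averages the requests-per-second figure across runs:

```javascript
// Run the same benchmark several times and average requests/sec,
// since individual runs can vary (best practice #3).
// `runOnce` is a placeholder for one full benchmark run.
async function averageReqPerSec(runOnce, iterations = 3) {
  const samples = [];
  for (let i = 0; i < iterations; i++) {
    const result = await runOnce();        // wait for this run to finish
    samples.push(result.requests.average); // collect its req/sec average
  }
  return samples.reduce((sum, v) => sum + v, 0) / samples.length;
}

// Demo with stubbed results standing in for three real runs:
const stubbedRuns = [3800, 3900, 4000].map(avg => async () => ({ requests: { average: avg } }));
let i = 0;
averageReqPerSec(() => stubbedRuns[i++](), 3)
  .then(avg => console.log(`Average req/sec over 3 runs: ${avg.toFixed(0)}`));
```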

Real-world Benchmarking Example

Let's implement a more realistic benchmarking scenario where we compare different ways to handle a common task in Express: parsing and validating request data.

First, let's create an Express application with different endpoints that implement the same functionality in different ways:

javascript
// app-compare.js
const express = require('express');
const bodyParser = require('body-parser');
const Joi = require('joi');
const app = express();

// Middleware
app.use(bodyParser.json());

// User schema for validation
const userSchema = Joi.object({
  name: Joi.string().min(3).max(30).required(),
  email: Joi.string().email().required(),
  age: Joi.number().integer().min(18).max(120)
});

// Implementation 1: Manual validation
app.post('/api/users/manual', (req, res) => {
  const { name, email, age } = req.body;

  // Manual validation
  const errors = [];
  if (!name || name.length < 3 || name.length > 30) {
    errors.push('Invalid name');
  }

  if (!email || !email.match(/^[^\s@]+@[^\s@]+\.[^\s@]+$/)) {
    errors.push('Invalid email');
  }

  if (!age || age < 18 || age > 120 || !Number.isInteger(age)) {
    errors.push('Invalid age');
  }

  if (errors.length > 0) {
    return res.status(400).json({ errors });
  }

  // Process the validated data
  return res.status(201).json({ name, email, age });
});

// Implementation 2: Using Joi
app.post('/api/users/joi', (req, res) => {
  const { error, value } = userSchema.validate(req.body);

  if (error) {
    return res.status(400).json({ errors: error.details.map(detail => detail.message) });
  }

  // Process the validated data
  return res.status(201).json(value);
});

const PORT = process.env.PORT || 3000;
app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

Now, let's create a benchmark script to compare these two implementations:

javascript
// benchmark-compare.js
const autocannon = require('autocannon');

const validUser = {
  name: 'John Doe',
  email: '[email protected]',
  age: 30
};

async function run(title, url, payload) {
  console.log(`Running benchmark for ${title}...`);

  // With no callback, autocannon returns a promise, so awaiting it
  // ensures each benchmark finishes before the next one starts.
  const results = await autocannon({
    url,
    connections: 100,
    duration: 10,
    method: 'POST',
    headers: {
      'Content-Type': 'application/json'
    },
    body: JSON.stringify(payload)
  });

  console.log(`Results for ${title}:`);
  console.log(`  Requests/sec: ${results.requests.average}`);
  console.log(`  Latency (avg): ${results.latency.average} ms`);
  console.log(`  Latency (max): ${results.latency.max} ms`);
  console.log(`  Errors: ${results.errors}`);
  console.log('\n');
}

// Run benchmarks in sequence
async function runBenchmarks() {
  await run('Manual Validation', 'http://localhost:3000/api/users/manual', validUser);
  await run('Joi Validation', 'http://localhost:3000/api/users/joi', validUser);
}

runBenchmarks();

To run this comparison benchmark:

bash
# First, install dependencies
npm install express body-parser joi autocannon

# Start your Express app
node app-compare.js

# In a separate terminal, run the benchmark
node benchmark-compare.js

Analyzing Benchmark Results

After running the benchmark, you might see results like this:

Results for Manual Validation:
Requests/sec: 8962.43
Latency (avg): 11.13 ms
Latency (max): 38.67 ms
Errors: 0

Results for Joi Validation:
Requests/sec: 5124.76
Latency (avg): 19.47 ms
Latency (max): 52.21 ms
Errors: 0

In this case, manual validation clearly outperforms the Joi-based implementation. That is expected: Joi does more work per request than a few hand-written checks, traversing the schema, coercing types, and building detailed error reports. Even so, the Joi approach may be preferred for its readability and maintainability, and a benchmark like this lets you decide whether the performance trade-off is acceptable for your workload.
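To put the sample numbers above in perspective, the relative difference works out as follows:

```javascript
// Relative throughput difference between the two sample results above.
const manualRps = 8962.43;
const joiRps = 5124.76;
const diff = ((manualRps - joiRps) / joiRps) * 100;
console.log(`Manual validation handled ${diff.toFixed(0)}% more requests/sec`); // ~75%
```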

Optimizations Based on Benchmarking Results

Once you've identified performance bottlenecks through benchmarking, you can implement various optimizations:

  1. Caching: Add response caching for frequently-accessed, rarely-changing data:
javascript
const mcache = require('memory-cache');

// Cache middleware
const cache = (duration) => {
  return (req, res, next) => {
    // Parenthesize the fallback: '+' binds tighter than '||'
    const key = '__express__' + (req.originalUrl || req.url);
    const cachedBody = mcache.get(key);
    if (cachedBody) {
      res.send(cachedBody);
      return;
    } else {
      res.sendResponse = res.send;
      res.send = (body) => {
        mcache.put(key, body, duration * 1000);
        res.sendResponse(body);
      };
      next();
    }
  };
};

// Use the cache middleware
app.get('/api/data', cache(30), (req, res) => {
  // The result of this expensive operation will be cached for 30 seconds
  res.json({ data: generateExpensiveData() });
});
  2. Compression: Enable compression to reduce response size:
javascript
const compression = require('compression');
app.use(compression());
  3. Load Balancing: Distribute traffic across multiple instances of your application:
javascript
// cluster.js
const cluster = require('cluster');
const os = require('os');
const numCPUs = os.cpus().length;

if (cluster.isPrimary) { // cluster.isMaster on Node versions before 16
  console.log(`Primary ${process.pid} is running`);

  // Fork one worker per CPU core
  for (let i = 0; i < numCPUs; i++) {
    cluster.fork();
  }

  cluster.on('exit', (worker) => {
    console.log(`Worker ${worker.process.pid} died`);
    cluster.fork(); // Replace the dead worker
  });
} else {
  // Workers can share any TCP connection
  require('./app');
  console.log(`Worker ${process.pid} started`);
}
  4. Database Query Optimization: Use techniques like indexing, query optimization, and connection pooling.
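Connection pooling, mentioned in point 4, is worth illustrating. The sketch below is a deliberately minimal, driver-agnostic pool (real drivers such as `pg` or `mysql2` ship production-grade pools); the point is that expensive connections are created once and reused across requests:

```javascript
// A deliberately minimal pool: resources are created up front and
// reused, so request handlers never pay connection-setup cost.
class SimplePool {
  constructor(factory, size) {
    this.idle = Array.from({ length: size }, factory); // pre-create resources
  }
  acquire() {
    // Returns undefined when exhausted; real pools queue waiting callers.
    return this.idle.pop();
  }
  release(resource) {
    this.idle.push(resource); // hand the resource back for reuse
  }
}

// Usage with a stand-in "connection" factory:
let created = 0;
const pool = new SimplePool(() => ({ id: ++created }), 2);

for (let request = 0; request < 100; request++) {
  const conn = pool.acquire(); // ...run a query with conn...
  pool.release(conn);
}
console.log(`Connections created for 100 requests: ${created}`); // 2
```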

Summary

Benchmarking is an essential practice for developing high-performance Express applications. By measuring your application's performance metrics, you can identify bottlenecks, validate optimizations, and make informed decisions about your application architecture.

We've covered:

  • What benchmarking is and why it's important
  • Popular tools for benchmarking Express applications
  • How to set up and run benchmarks
  • Best practices for accurate benchmarking
  • Real-world benchmarking examples
  • Common optimizations based on benchmarking results

By making benchmarking a regular part of your development process, you can ensure your Express applications remain performant as they evolve.

Exercises

  1. Benchmark an Express application with and without compression middleware.
  2. Compare the performance of different JSON parsing libraries (e.g., body-parser vs. fast-json-parse).
  3. Create a benchmark that compares the performance of different template engines (e.g., EJS, Handlebars, Pug).
  4. Benchmark the difference between synchronous and asynchronous file operations in Express.
  5. Implement a benchmark to measure the impact of adding various middleware to your Express application.
