Database Monitoring with Prometheus

Introduction

Database performance is critical to application health. Slow queries, connection problems, or storage issues can cascade into system-wide failures. Effective database monitoring helps detect problems early, optimize performance, and ensure reliability.

Prometheus excels at database monitoring through its time-series data collection, powerful querying capabilities, and integration with various database exporters. This guide will show you how to implement Prometheus monitoring for your databases, regardless of which database system you're using.

Why Monitor Databases with Prometheus?

Databases require special attention in your monitoring strategy because they:

  • Store your application's most valuable asset: data
  • Often become bottlenecks in system performance
  • Can experience various failure modes (connectivity issues, query performance degradation, storage problems)
  • Need capacity planning to accommodate growth

Prometheus offers several advantages for database monitoring:

  1. Single monitoring platform - Monitor databases alongside your other infrastructure
  2. Historical data - Track long-term trends to inform capacity planning
  3. Powerful query language (PromQL) - Create custom metrics and alerts
  4. Rich ecosystem of exporters - Ready-made solutions for most database systems
  5. Integration with visualization tools - Create comprehensive dashboards with Grafana

Getting Started with Database Monitoring

To monitor databases with Prometheus, you'll typically follow this workflow:

  1. Choose and install an exporter for your database system
  2. Create a database user with the permissions the exporter needs
  3. Run the exporter so it exposes metrics on an HTTP endpoint
  4. Configure Prometheus to scrape the exporter
  5. Build alerts and Grafana dashboards on top of the collected metrics

Let's walk through each step.

Database Exporters

Prometheus follows a pull-based model, so you'll need an exporter that exposes database metrics in a format Prometheus can scrape. Here are some popular database exporters:

Database System | Exporter | Description
MySQL/MariaDB | mysqld_exporter | Collects MySQL server metrics
PostgreSQL | postgres_exporter | Exposes PostgreSQL metrics
MongoDB | mongodb_exporter | Gathers MongoDB server statistics
Redis | redis_exporter | Exposes Redis metrics
Elasticsearch | elasticsearch_exporter | Collects Elasticsearch metrics
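
All of these exporters serve metrics as plain text over HTTP in the Prometheus exposition format: one sample per line, written as a metric name (optionally with labels) followed by a value. You can inspect any exporter's output directly; the example below assumes the mysqld_exporter from the walkthrough that follows, listening on its default port 9104, and the sample values are purely illustrative:

bash
# Exporters publish plain "metric_name{labels} value" lines over HTTP, for example:
#   mysql_global_status_threads_connected 12
#   mysql_global_status_slow_queries 345
# Inspect an exporter's output with curl (mysqld_exporter listens on 9104 by default):
curl -s http://localhost:9104/metrics | head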

Example: Setting Up MySQL Monitoring

Let's walk through a practical example of monitoring a MySQL database with Prometheus.

1. Install the MySQL Exporter

First, download and install the MySQL exporter:

bash
# Download the mysqld_exporter
wget https://github.com/prometheus/mysqld_exporter/releases/download/v0.14.0/mysqld_exporter-0.14.0.linux-amd64.tar.gz

# Extract the archive
tar xvfz mysqld_exporter-0.14.0.linux-amd64.tar.gz

# Move the binary to a suitable location
sudo mv mysqld_exporter-0.14.0.linux-amd64/mysqld_exporter /usr/local/bin/

2. Create a MySQL User for Monitoring

The exporter needs a MySQL user with appropriate permissions:

sql
CREATE USER 'exporter'@'localhost' IDENTIFIED BY 'password' WITH MAX_USER_CONNECTIONS 3;
GRANT PROCESS, REPLICATION CLIENT, SELECT ON *.* TO 'exporter'@'localhost';
FLUSH PRIVILEGES;

3. Configure the MySQL Exporter

Create a configuration file for the exporter:

bash
# Create a credentials file that the exporter's service user can read
sudo tee /etc/.mysqld_exporter.cnf << EOF
[client]
user=exporter
password=password
EOF

# Restrict ownership and permissions
sudo chown prometheus:prometheus /etc/.mysqld_exporter.cnf
sudo chmod 600 /etc/.mysqld_exporter.cnf

4. Run the MySQL Exporter

Start the exporter as a service:

bash
# Create a systemd service file
sudo tee /etc/systemd/system/mysqld_exporter.service << EOF
[Unit]
Description=Prometheus MySQL Exporter
After=network.target mysqld.service

[Service]
User=prometheus
ExecStart=/usr/local/bin/mysqld_exporter --config.my-cnf=/etc/.mysqld_exporter.cnf

[Install]
WantedBy=multi-user.target
EOF

# Start and enable the service
sudo systemctl daemon-reload
sudo systemctl start mysqld_exporter
sudo systemctl enable mysqld_exporter
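
With the service running, it's worth confirming that the exporter is healthy and actually serving metrics before wiring it into Prometheus (9104 is mysqld_exporter's default port; mysql_up reports 1 when the exporter can reach MySQL):

bash
# Check the service and make sure metrics are being served
systemctl status mysqld_exporter
curl -s http://localhost:9104/metrics | grep '^mysql_up'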

5. Configure Prometheus to Scrape MySQL Metrics

Add a new job to your prometheus.yml configuration:

yaml
scrape_configs:
  - job_name: 'mysql'
    static_configs:
      - targets: ['localhost:9104']
    relabel_configs:
      - source_labels: [__address__]
        target_label: instance
        regex: '([^:]+)(:[0-9]+)?'
        replacement: '${1}'
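
Before restarting, you can validate the edited configuration with promtool, which ships with Prometheus (adjust the path if your prometheus.yml lives elsewhere):

bash
# Validate the Prometheus configuration before applying it
promtool check config /etc/prometheus/prometheus.yml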

6. Restart Prometheus to Apply Changes

bash
sudo systemctl restart prometheus
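
After the restart, confirm that the new mysql job appears as a healthy target, either on the Targets page of the Prometheus web UI or via the HTTP API. The snippet below assumes Prometheus listens on its default port 9090 and that jq is installed for filtering:

bash
# List active scrape targets and their health
curl -s http://localhost:9090/api/v1/targets | jq '.data.activeTargets[] | {job: .labels.job, health: .health}'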

Key MySQL Metrics to Monitor

Once set up, you can track these essential MySQL metrics:

Connection Metrics

  • mysql_global_status_threads_connected - Current connections
  • mysql_global_status_max_used_connections - Peak connection usage
  • mysql_global_variables_max_connections - Connection limit

Query Performance

  • mysql_global_status_queries - Total statements executed (wrap in rate() for queries per second)
  • mysql_global_status_slow_queries - Total slow queries (wrap in rate() for slow queries per second)
  • mysql_global_status_questions - Total statements sent by clients

Buffer Pool Metrics

  • mysql_global_status_buffer_pool_pages_total - Total buffer pool pages
  • mysql_global_status_buffer_pool_pages_free - Free buffer pool pages
  • mysql_global_status_innodb_buffer_pool_reads - Reads that missed the buffer pool and went to disk (wrap in rate() for the read rate)

Example PromQL Queries

Here are some useful PromQL queries for MySQL monitoring:

Connection Utilization Percentage:

(mysql_global_status_threads_connected / mysql_global_variables_max_connections) * 100

Slow Query Rate:

rate(mysql_global_status_slow_queries[5m])

Buffer Pool Hit Ratio:

(1 - (rate(mysql_global_status_innodb_buffer_pool_reads[5m]) / 
rate(mysql_global_status_innodb_buffer_pool_read_requests[5m]))) * 100

Example: PostgreSQL Monitoring

PostgreSQL monitoring follows a similar pattern. Here's how to set it up:

1. Install the PostgreSQL Exporter

bash
# Download the postgres_exporter
wget https://github.com/prometheus-community/postgres_exporter/releases/download/v0.11.1/postgres_exporter-0.11.1.linux-amd64.tar.gz

# Extract and install
tar xvfz postgres_exporter-0.11.1.linux-amd64.tar.gz
sudo mv postgres_exporter-0.11.1.linux-amd64/postgres_exporter /usr/local/bin/

2. Create a PostgreSQL User for Monitoring

sql
CREATE USER postgres_exporter WITH PASSWORD 'password';
ALTER USER postgres_exporter SET SEARCH_PATH TO postgres_exporter,pg_catalog;

-- Grant permissions
GRANT pg_monitor TO postgres_exporter;

3. Configure and Run the PostgreSQL Exporter

bash
# Create environment variable file
sudo tee /etc/default/postgres_exporter << EOF
DATA_SOURCE_NAME="postgresql://postgres_exporter:password@localhost:5432/postgres?sslmode=disable"
EOF

# Create systemd service
sudo tee /etc/systemd/system/postgres_exporter.service << EOF
[Unit]
Description=Prometheus PostgreSQL Exporter
After=network.target postgresql.service

[Service]
User=prometheus
EnvironmentFile=/etc/default/postgres_exporter
ExecStart=/usr/local/bin/postgres_exporter

[Install]
WantedBy=multi-user.target
EOF

# Start and enable
sudo systemctl daemon-reload
sudo systemctl start postgres_exporter
sudo systemctl enable postgres_exporter

4. Configure Prometheus

Add this to your prometheus.yml:

yaml
scrape_configs:
  - job_name: 'postgresql'
    static_configs:
      - targets: ['localhost:9187']
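
Once Prometheus is scraping the exporter, a quick sanity check is the pg_up metric that postgres_exporter publishes: it reports 1 when the exporter's last scrape could connect to PostgreSQL and 0 otherwise. In the Prometheus expression browser:

pg_up == 0

An empty result means every monitored PostgreSQL instance was reachable on its last scrape.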

Advanced Database Monitoring Techniques

Custom Metrics

Both MySQL and PostgreSQL exporters allow you to define custom metrics. For example, with the PostgreSQL exporter, you can create a custom queries file:

yaml
# custom-queries.yaml
pg_replication:
  query: "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())) as lag"
  metrics:
    - lag:
        usage: "GAUGE"
        description: "Replication lag in seconds"

Then reference this file when starting the exporter:

bash
postgres_exporter --extend.query-path=/path/to/custom-queries.yaml
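
With the file above, the exporter should expose the value as a gauge named from the namespace plus the column, i.e. pg_replication_lag (this naming is an assumption based on how extend.query-path composes metric names; confirm it against the exporter's /metrics output). You can then use it like any other metric, for example to flag replicas more than five minutes behind:

pg_replication_lag > 300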

Monitoring Database Query Performance

To monitor specific query performance, consider using tools like:

  • pg_stat_statements (PostgreSQL)
  • performance_schema (MySQL)

These can be integrated with Prometheus exporters to collect detailed query metrics.
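
As an example, pg_stat_statements must be loaded via shared_preload_libraries and created as an extension before it records anything. Once enabled, you can inspect the statements it tracks directly, or expose summaries through a custom query file like the one shown earlier. The column names below apply to PostgreSQL 13 and later; older releases use total_time instead of total_exec_time:

sql
-- Enable statement tracking (requires shared_preload_libraries = 'pg_stat_statements' and a restart)
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;

-- Top five statements by cumulative execution time
SELECT query, calls, total_exec_time
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 5;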

Setting Up Alerts

Once you have metrics flowing, set up alerts for database issues:

yaml
# prometheus/alerts.yml
groups:
  - name: DatabaseAlerts
    rules:
      - alert: MySQLHighConnectionUsage
        expr: mysql_global_status_threads_connected / mysql_global_variables_max_connections * 100 > 80
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "MySQL high connection usage (instance {{ $labels.instance }})"
          description: "MySQL connection usage is above 80% for 5 minutes\n  VALUE = {{ $value }}%\n  LABELS: {{ $labels }}"

      - alert: PostgreSQLHighCPU
        expr: rate(process_cpu_seconds_total{job="postgresql"}[1m]) > 0.8
        for: 5m
        labels:
          severity: warning
        annotations:
          summary: "PostgreSQL high CPU usage (instance {{ $labels.instance }})"
          description: "PostgreSQL is using a high amount of CPU\n  VALUE = {{ $value }}\n  LABELS: {{ $labels }}"

Visualizing Database Metrics with Grafana

Create comprehensive database dashboards in Grafana by importing pre-made templates from Grafana's public dashboard library, or by building your own.

For a custom dashboard, you might include panels like:

  • Connection utilization
  • Query rates
  • Buffer/cache hit ratios
  • Transaction rates
  • Replication lag
  • Storage usage
  • Index usage

Common Issues and Troubleshooting

Exporter Can't Connect to Database

If your exporter can't connect:

  1. Verify database credentials (for example, by logging in manually as shown after this list)
  2. Check network connectivity
  3. Ensure database service is running
  4. Examine database logs for connection rejection reasons
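
For the first check, it often helps to try the exporter's credentials by hand; if an interactive login fails, the exporter will fail in the same way. For example, with the MySQL monitoring user created earlier:

bash
# Try the exporter's credentials directly against the database
mysql -u exporter -p -h localhost -e 'SELECT 1;'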

Missing or Incomplete Metrics

For missing metrics:

  1. Verify exporter is running (systemctl status exporter_name)
  2. Check exporter logs for errors
  3. Confirm Prometheus configuration is correct
  4. Test direct connection to exporter (curl http://localhost:port/metrics)

High Cardinality Issues

Database monitoring can generate metrics with high cardinality (e.g., per-table or per-query metrics). To manage this:

  1. Use label filters in your scrape configuration
  2. Implement metric aggregation where appropriate
  3. Consider using recording rules for frequently used queries (a sketch follows below)
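
As a sketch of the third point, a recording rule precomputes an expensive or frequently evaluated expression (here the buffer pool hit ratio from earlier) so dashboards and alerts can read the cheaper, precomputed series. The file and series names are illustrative; reference the file from rule_files in prometheus.yml:

yaml
# recording-rules.yml
groups:
  - name: mysql_recording_rules
    rules:
      - record: mysql:buffer_pool_hit_ratio:5m
        expr: (1 - (rate(mysql_global_status_innodb_buffer_pool_reads[5m]) / rate(mysql_global_status_innodb_buffer_pool_read_requests[5m]))) * 100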

Best Practices for Database Monitoring

  1. Monitor at multiple levels - Track host-level metrics (CPU, memory, disk), database instance metrics, and query-level metrics
  2. Set meaningful thresholds - Base alert thresholds on historical patterns and business requirements
  3. Correlate metrics - Look for relationships between metrics to identify root causes
  4. Keep historical data - Retain long-term metrics for capacity planning
  5. Monitor query performance - Track slow queries and query patterns
  6. Add context with labels - Use labels to differentiate between environments, clusters, and database instances (a sketch follows below)
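
For the last point, labels can be attached where targets are defined, so every series from a given exporter carries its environment and cluster. A minimal sketch, reusing the MySQL job from earlier with illustrative target and label values:

yaml
scrape_configs:
  - job_name: 'mysql'
    static_configs:
      - targets: ['db1.example.com:9104']
        labels:
          env: production
          cluster: orders-db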

Summary

Monitoring databases with Prometheus provides deep insights into database performance and health. By installing appropriate exporters, configuring Prometheus to scrape metrics, and visualizing data with Grafana, you can build a comprehensive database monitoring solution.

Remember that effective database monitoring combines:

  • Collection - Gathering the right metrics
  • Visualization - Creating informative dashboards
  • Alerting - Setting up timely notifications
  • Analysis - Understanding the patterns and relationships

With these elements in place, you'll be well-positioned to ensure your databases remain healthy, performant, and reliable.

Exercises

  1. Set up MySQL monitoring with Prometheus on a test database and create alerts for high connection usage.
  2. Create a custom PostgreSQL metric that tracks the number of active connections per database.
  3. Build a Grafana dashboard that shows the correlation between database query rate and CPU usage.
  4. Configure alerts for replication lag in a master-slave database setup.
  5. Monitor query performance by integrating with performance_schema (MySQL) or pg_stat_statements (PostgreSQL).

