Performance Monitoring
Introduction
Performance monitoring is a critical aspect of database administration that involves tracking, measuring, and analyzing the performance of database systems. It helps identify bottlenecks, predict potential issues, and ensure optimal database operation. For beginners, understanding how to monitor database performance is essential as it forms the foundation for maintaining efficient and reliable database systems.
In this guide, we'll explore the fundamentals of database performance monitoring, the key metrics to track, tools you can use, and practical approaches to identify and resolve performance issues.
Why Performance Monitoring Matters
Database performance directly impacts the user experience and the efficiency of applications that rely on the database. Poor performance can lead to:
- Slow application response times
- System timeouts and failures
- Increased operational costs
- Poor user experience
- Lost productivity and revenue
Implementing proper performance monitoring practices helps prevent these issues and ensures your database systems operate efficiently.
Key Performance Metrics to Monitor
1. Query Performance
Response Time
Response time measures how long it takes for the database to respond to a request.
-- Example query to check response time in PostgreSQL
-- (requires the pg_stat_statements extension; on PostgreSQL 13+
-- the columns are named total_exec_time and mean_exec_time)
SELECT query, calls, total_time, mean_time
FROM pg_stat_statements
ORDER BY mean_time DESC
LIMIT 10;
Output:
query                              | calls | total_time | mean_time
-----------------------------------+-------+------------+-----------
SELECT * FROM large_table          |  1250 |     3520.4 |      2.82
UPDATE users SET last_login = $1   |  7890 |     4560.1 |      0.58
INSERT INTO logs VALUES ($1, $2)   | 45600 |    12340.2 |      0.27
...                                |   ... |        ... |       ...
Query Execution Plans
Execution plans show how the database executes queries and help identify inefficient operations.
-- Example EXPLAIN query in MySQL
EXPLAIN SELECT * FROM customers
WHERE last_purchase_date > '2023-01-01'
AND region_id = 5;
Output:
+----+-------------+-----------+------------+------+---------------+------+---------+------+------+----------+-------------+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+----+-------------+-----------+------------+------+---------------+------+---------+------+------+----------+-------------+
| 1 | SIMPLE | customers | NULL | ALL | NULL | NULL | NULL | NULL | 1000 | 11.11 | Using where |
+----+-------------+-----------+------------+------+---------------+------+---------+------+------+----------+-------------+
2. Resource Utilization
CPU Usage
High CPU usage might indicate inefficient queries or insufficient resources.
-- Oracle query to check CPU usage by session
SELECT se.username,
       se.sid,
       ss.value/100 as cpu_used_seconds
FROM v$sesstat ss, v$statname sn, v$session se
WHERE ss.statistic# = sn.statistic#
  AND se.sid = ss.sid
  AND sn.name = 'CPU used by this session'
ORDER BY ss.value DESC;
Memory Usage
Monitor memory structures like buffer cache, shared pool, and PGA/SGA areas.
-- SQL Server memory usage query
-- (note: the KB conversion applies only to the two memory counters;
-- 'Buffer cache hit ratio' must be divided by its base counter, and
-- 'Page life expectancy' is reported in seconds)
SELECT
    object_name,
    counter_name,
    cntr_value,
    cntr_value/1024.0 as value_MB
FROM sys.dm_os_performance_counters
WHERE counter_name IN (
    'Total Server Memory (KB)',
    'Target Server Memory (KB)',
    'Buffer cache hit ratio',
    'Page life expectancy'
);
Disk I/O
I/O operations often become bottlenecks in database systems.
-- MySQL I/O statistics query
SELECT file_name,
count_read,
count_write,
sum_number_of_bytes_read/1024/1024 as read_MB,
sum_number_of_bytes_write/1024/1024 as write_MB
FROM performance_schema.file_summary_by_instance
WHERE file_name LIKE '%ibdata%' OR file_name LIKE '%.ibd';
3. Connection Metrics
Active Connections
Too many connections can overload the database server.
-- PostgreSQL active connections query
SELECT count(*) as active_connections
FROM pg_stat_activity
WHERE state = 'active';
Connection Pool Utilization
For applications using connection pools, monitor utilization rates.
// Example Java code for monitoring a HikariCP connection pool via JMX
// (requires registerMbeans=true in the HikariCP configuration)
import com.zaxxer.hikari.HikariPoolMXBean;
import javax.management.JMX;
import javax.management.MBeanServer;
import javax.management.ObjectName;
import java.lang.management.ManagementFactory;

public class ConnectionPoolMonitor {
    public static void monitorHikariCP(String poolName) {
        try {
            MBeanServer mBeanServer = ManagementFactory.getPlatformMBeanServer();
            ObjectName poolObjectName = new ObjectName("com.zaxxer.hikari:type=Pool (" + poolName + ")");
            HikariPoolMXBean poolProxy = JMX.newMXBeanProxy(mBeanServer, poolObjectName, HikariPoolMXBean.class);
            System.out.println("Active Connections: " + poolProxy.getActiveConnections());
            System.out.println("Idle Connections: " + poolProxy.getIdleConnections());
            System.out.println("Total Connections: " + poolProxy.getTotalConnections());
            System.out.println("Awaiting Connections: " + poolProxy.getThreadsAwaitingConnection());
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
4. Transaction Metrics
Transaction Throughput
Measures how many transactions your database can process per second.
-- MySQL transaction per second query
SHOW GLOBAL STATUS LIKE 'Com_commit';
-- Run again after 1 second and calculate the difference
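Turning the two samples into a throughput figure takes only a little arithmetic. A minimal sketch in Python; the counter values below are illustrative stand-ins for what you would read from `SHOW GLOBAL STATUS`:

```python
# Compute transactions per second from two samples of a cumulative
# counter such as MySQL's Com_commit (values below are illustrative).
def transactions_per_second(first_sample, second_sample, interval_seconds):
    """Rate of a monotonically increasing counter over a sampling interval."""
    if interval_seconds <= 0:
        raise ValueError("interval must be positive")
    return (second_sample - first_sample) / interval_seconds

# Example: Com_commit read twice, 10 seconds apart
tps = transactions_per_second(1_204_500, 1_206_700, 10)
print(f"{tps:.1f} commits/second")  # 220.0 commits/second
```

The same delta-over-interval pattern works for any cumulative status counter (Com_rollback, Questions, and so on).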
Transaction Response Time
Average time to complete a transaction.
-- PostgreSQL per-database transaction statistics
-- (pg_stat_database reports commit/rollback counts and I/O counters,
-- not response time directly; use these as context alongside query timings)
SELECT datname,
xact_commit,
xact_rollback,
blks_read,
blks_hit,
tup_returned,
tup_fetched
FROM pg_stat_database
WHERE datname = 'your_database';
5. Wait Events
Wait events help identify what resources or operations are causing delays.
-- Oracle top wait events query
SELECT event,
total_waits,
time_waited,
average_wait
FROM v$system_event
ORDER BY time_waited DESC;
Performance Monitoring Tools
Built-in Database Tools
Most database systems come with built-in monitoring tools:
- MySQL: Performance Schema, Information Schema
- PostgreSQL: pg_stat views, pg_stat_statements
- SQL Server: Dynamic Management Views (DMVs), SQL Server Profiler
- Oracle: Automatic Workload Repository (AWR), Active Session History (ASH)
Example: Using PostgreSQL pg_stat_statements
First, enable the extension:
-- Enable pg_stat_statements extension
CREATE EXTENSION IF NOT EXISTS pg_stat_statements;
Then query for slow queries:
-- Find top 10 queries by total execution time
-- (on PostgreSQL 13+, use total_exec_time and mean_exec_time)
SELECT substring(query, 1, 50) as short_query,
       round(total_time::numeric, 2) as total_time,
       calls,
       round(mean_time::numeric, 2) as mean_time,
       round((100 * total_time / sum(total_time) OVER ())::numeric, 2) as percentage_total
FROM pg_stat_statements
ORDER BY total_time DESC
LIMIT 10;
Third-Party Monitoring Tools
Several third-party tools offer more comprehensive monitoring capabilities:
- Prometheus + Grafana: Open-source monitoring and visualization
- DataDog: Cloud monitoring with database-specific dashboards
- New Relic: Application performance monitoring including database insights
- SolarWinds Database Performance Analyzer: Specialized for database performance
Custom Monitoring Scripts
You can develop custom scripts for specific monitoring needs:
import psycopg2
import time
from datetime import datetime

def monitor_postgres_connections():
    conn = psycopg2.connect("dbname=yourdb user=youruser password=yourpass host=localhost")
    cursor = conn.cursor()
    try:
        while True:
            cursor.execute("SELECT count(*) FROM pg_stat_activity WHERE state = 'active';")
            active_connections = cursor.fetchone()[0]
            cursor.execute("SELECT count(*) FROM pg_stat_activity WHERE state = 'idle';")
            idle_connections = cursor.fetchone()[0]
            print(f"{datetime.now()} - Active: {active_connections}, Idle: {idle_connections}")
            time.sleep(60)  # Check every minute
    finally:
        cursor.close()
        conn.close()

if __name__ == "__main__":
    monitor_postgres_connections()
Setting Up a Monitoring Dashboard
A monitoring dashboard provides a visual representation of your database performance metrics. Here's how to set up a basic monitoring system using Prometheus and Grafana:
1. Set up the database exporter
For PostgreSQL:
# Download and run postgres_exporter
wget https://github.com/prometheus-community/postgres_exporter/releases/download/v0.10.0/postgres_exporter-0.10.0.linux-amd64.tar.gz
tar xvfz postgres_exporter-*.tar.gz
cd postgres_exporter-*
# Set environment variables
export DATA_SOURCE_NAME="postgresql://username:password@localhost:5432/database?sslmode=disable"
# Run the exporter
./postgres_exporter
2. Configure Prometheus to scrape metrics
# prometheus.yml
scrape_configs:
  - job_name: 'postgres'
    static_configs:
      - targets: ['localhost:9187']
3. Set up Grafana dashboard
- Install and run Grafana
- Add Prometheus as a data source
- Import a PostgreSQL dashboard template or create your own
Performance Monitoring Workflow
A comprehensive performance monitoring workflow includes:
1. Collecting Baseline Metrics
Establish normal performance patterns during different time periods:
-- PostgreSQL query to gather baseline metrics
SELECT
datname,
now() as sample_time,
numbackends as connections,
xact_commit as commits,
xact_rollback as rollbacks,
blks_read as disk_reads,
blks_hit as cache_hits,
tup_returned as rows_returned,
tup_fetched as rows_fetched,
tup_inserted as rows_inserted,
tup_updated as rows_updated,
tup_deleted as rows_deleted
FROM pg_stat_database
WHERE datname IS NOT NULL;
Run this query at regular intervals (e.g., every hour) and store the results for future reference.
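Because the pg_stat_database counters are cumulative since the last stats reset, a baseline is most useful as deltas between consecutive samples. A small sketch of that computation, with illustrative counter values:

```python
# pg_stat_database counters are cumulative, so baselines are most useful
# as deltas between consecutive samples. Sample values are illustrative.
def interval_stats(prev, curr):
    """Per-interval deltas and cache hit ratio from two cumulative samples.

    prev/curr are dicts of counters sampled from pg_stat_database.
    """
    reads = curr["blks_read"] - prev["blks_read"]
    hits = curr["blks_hit"] - prev["blks_hit"]
    commits = curr["xact_commit"] - prev["xact_commit"]
    total = reads + hits
    hit_ratio = 100.0 * hits / total if total else 100.0
    return {"commits": commits, "disk_reads": reads, "cache_hit_pct": hit_ratio}

sample_1 = {"blks_read": 10_000, "blks_hit": 990_000, "xact_commit": 50_000}
sample_2 = {"blks_read": 10_200, "blks_hit": 1_009_800, "xact_commit": 52_500}
print(interval_stats(sample_1, sample_2))
# {'commits': 2500, 'disk_reads': 200, 'cache_hit_pct': 99.0}
```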
2. Setting Alert Thresholds
Once you've established baselines, set thresholds for alerts:
- CPU Usage: Alert if sustained usage > 80% for 5 minutes
- Memory: Alert if buffer cache hit ratio < 95%
- Disk I/O: Alert if average I/O wait time > 20ms
- Connections: Alert if > 80% of max_connections are used
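These checks are simple comparisons against the baseline, so they are easy to script. A minimal sketch of threshold evaluation; the metric names and limits below mirror the bullet list and are assumptions, not a standard API:

```python
# A minimal sketch of alert-threshold evaluation. Metric names and
# limits are assumptions mirroring the thresholds listed above.
THRESHOLDS = {
    "cpu_pct": 80,        # sustained CPU usage
    "io_wait_ms": 20,     # average I/O wait time
    "conn_pct": 80,       # share of max_connections in use
}
MINIMUMS = {
    "cache_hit_pct": 95,  # buffer cache hit ratio should stay above this
}

def check_thresholds(metrics):
    """Return alert strings for any metric outside its configured bound."""
    alerts = []
    for name, limit in THRESHOLDS.items():
        if metrics.get(name, 0) > limit:
            alerts.append(f"{name}={metrics[name]} exceeds {limit}")
    for name, floor in MINIMUMS.items():
        if metrics.get(name, 100) < floor:
            alerts.append(f"{name}={metrics[name]} below {floor}")
    return alerts

print(check_thresholds({"cpu_pct": 91, "io_wait_ms": 5,
                        "conn_pct": 40, "cache_hit_pct": 92.5}))
# ['cpu_pct=91 exceeds 80', 'cache_hit_pct=92.5 below 95']
```

In practice, the "sustained for 5 minutes" condition means alerting only after several consecutive samples breach the limit, which avoids paging on brief spikes.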
3. Regular Health Checks
Implement daily or weekly health checks:
import psycopg2
import smtplib
from email.message import EmailMessage

def database_health_check():
    conn = psycopg2.connect("dbname=yourdb user=youruser password=yourpass host=localhost")
    cursor = conn.cursor()
    issues = []

    # Check for long-running queries
    cursor.execute("""
        SELECT pid, now() - query_start as duration, query
        FROM pg_stat_activity
        WHERE state = 'active'
          AND now() - query_start > interval '5 minutes'
        ORDER BY duration DESC;
    """)
    long_running_queries = cursor.fetchall()
    if long_running_queries:
        issues.append(f"Found {len(long_running_queries)} long-running queries")

    # Check tablespace sizes
    cursor.execute("""
        SELECT spcname,
               pg_size_pretty(pg_tablespace_size(spcname)) as size
        FROM pg_tablespace;
    """)
    tablespaces = cursor.fetchall()
    # Add logic to check if tablespaces are too large

    # Check connection count
    cursor.execute("SELECT count(*) FROM pg_stat_activity;")
    conn_count = cursor.fetchone()[0]
    cursor.execute("SHOW max_connections;")
    max_conn = cursor.fetchone()[0]
    conn_percentage = (conn_count / int(max_conn)) * 100
    if conn_percentage > 80:
        issues.append(f"High connection usage: {conn_percentage:.2f}%")

    # Send report if issues found
    if issues:
        send_alert_email(issues)

    cursor.close()
    conn.close()

def send_alert_email(issues):
    msg = EmailMessage()
    msg['Subject'] = "Database Health Check Alert"
    msg['From'] = "[email protected]"
    msg['To'] = "[email protected]"
    msg.set_content("\n".join(issues))
    smtp_server = smtplib.SMTP('smtp.example.com', 587)
    smtp_server.starttls()
    smtp_server.login('username', 'password')
    smtp_server.send_message(msg)
    smtp_server.quit()

if __name__ == "__main__":
    database_health_check()
Troubleshooting Common Performance Issues
1. Slow Queries
Identifying Slow Queries
-- MySQL slow query identification (aggregated by normalized statement digest)
SELECT digest_text, count_star, sum_timer_wait/1000000000 as sum_timer_wait_ms
FROM performance_schema.events_statements_summary_by_digest
ORDER BY sum_timer_wait DESC
LIMIT 10;
Solution: Query Optimization
-- Before: Slow query without index
SELECT * FROM orders
WHERE customer_id = 12345
AND order_date BETWEEN '2023-01-01' AND '2023-12-31';
-- Add appropriate index
CREATE INDEX idx_customer_order_date ON orders(customer_id, order_date);
-- After: Same query now uses the index
SELECT * FROM orders
WHERE customer_id = 12345
AND order_date BETWEEN '2023-01-01' AND '2023-12-31';
2. High CPU Usage
Identifying CPU-intensive operations
-- PostgreSQL query to list the longest-running active queries
-- (pg_stat_activity does not expose CPU time directly; long-running
-- active queries are a common proxy for CPU-intensive work)
SELECT pid, usename, query_start, state, query
FROM pg_stat_activity
WHERE state = 'active'
ORDER BY query_start ASC;
Solution: Query Rewriting and Resource Limits
-- Before: CPU-intensive query with Cartesian product
SELECT customers.name, orders.order_id
FROM customers, orders, order_items
WHERE customers.customer_id = orders.customer_id
AND order_items.product_id > 1000;
-- After: Optimized query with proper joins
SELECT DISTINCT c.name, o.order_id
FROM customers c
JOIN orders o ON c.customer_id = o.customer_id
JOIN order_items oi ON o.order_id = oi.order_id
WHERE oi.product_id > 1000;
3. Memory Issues
Identifying Memory Problems
-- MySQL memory usage query
SHOW GLOBAL VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_reads';
SHOW GLOBAL STATUS LIKE 'Innodb_buffer_pool_read_requests';
Calculate buffer pool hit ratio:
Hit Ratio = (read_requests - reads) / read_requests * 100
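The same arithmetic as a small helper, which makes it easy to track the ratio over time; the counter values here are illustrative stand-ins for `SHOW GLOBAL STATUS` output:

```python
# InnoDB buffer pool hit ratio from the two status counters shown above
# (Innodb_buffer_pool_read_requests and Innodb_buffer_pool_reads).
def buffer_pool_hit_ratio(read_requests, reads):
    """Percentage of logical reads served from the buffer pool."""
    if read_requests == 0:
        return 100.0  # no reads yet, nothing missed
    return (read_requests - reads) / read_requests * 100

# Example: 2,000,000 read requests, 25,000 of which went to disk
print(f"{buffer_pool_hit_ratio(2_000_000, 25_000):.2f}%")  # 98.75%
```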
Solution: Memory Configuration Tuning
-- Increase InnoDB buffer pool size (MySQL)
SET GLOBAL innodb_buffer_pool_size = 4294967296; -- 4GB
-- For PostgreSQL, in postgresql.conf:
-- shared_buffers = 2GB
-- effective_cache_size = 6GB
4. Locking and Blocking
Identifying Locks
-- SQL Server blocking query
SELECT blocked.session_id as blocked_session,
blocking.session_id as blocking_session,
blocked.wait_time / 1000.0 as wait_time_seconds,
blocked.wait_type,
blocked_t.text as blocked_query,
blocking_t.text as blocking_query
FROM sys.dm_exec_requests blocked
JOIN sys.dm_exec_requests blocking ON blocking.session_id = blocked.blocking_session_id
CROSS APPLY sys.dm_exec_sql_text(blocked.sql_handle) blocked_t
CROSS APPLY sys.dm_exec_sql_text(blocking.sql_handle) blocking_t;
Solution: Transaction Management
-- Before: Long-running transaction
BEGIN TRANSACTION;
UPDATE large_table SET status = 'Processed' WHERE id > 1000000;
-- Long processing time here, blocking other transactions
COMMIT;
-- After: Chunked updates to reduce lock time (PostgreSQL 11+;
-- COMMIT inside a DO block only works when it is not run inside
-- an explicit transaction)
DO $$
DECLARE
    start_id INTEGER := 1000000;
    end_id INTEGER := 1100000;
    chunk_size INTEGER := 10000;
BEGIN
    WHILE start_id < end_id LOOP
        UPDATE large_table
        SET status = 'Processed'
        WHERE id BETWEEN start_id AND start_id + chunk_size - 1;
        COMMIT;
        start_id := start_id + chunk_size;
    END LOOP;
END $$;
Best Practices for Database Performance Monitoring
1. Implement Proactive Monitoring
Don't wait for users to report issues before checking performance.
# Sample Python script for proactive query monitoring
import psycopg2
import time
import statistics

def monitor_query_performance(query, iterations=10):
    conn = psycopg2.connect("dbname=yourdb user=youruser password=yourpass host=localhost")
    cursor = conn.cursor()
    execution_times = []
    for i in range(iterations):
        start_time = time.time()
        cursor.execute(query)
        cursor.fetchall()  # Actually consume the results
        end_time = time.time()
        execution_times.append(end_time - start_time)
        time.sleep(1)  # Pause between executions
    avg_time = sum(execution_times) / len(execution_times)
    median_time = statistics.median(execution_times)
    min_time = min(execution_times)
    max_time = max(execution_times)
    print("Query Performance Stats:")
    print(f"  Average time: {avg_time:.4f} seconds")
    print(f"  Median time: {median_time:.4f} seconds")
    print(f"  Min time: {min_time:.4f} seconds")
    print(f"  Max time: {max_time:.4f} seconds")
    cursor.close()
    conn.close()
    return {
        'avg': avg_time,
        'median': median_time,
        'min': min_time,
        'max': max_time
    }

# Example usage
critical_query = "SELECT * FROM users JOIN orders ON users.id = orders.user_id WHERE orders.status = 'processing'"
stats = monitor_query_performance(critical_query)
2. Document Performance Baselines
Keep records of normal performance patterns for your database.
# Performance Baseline Documentation
## Database: Production OLTP (PostgreSQL 14)
Date: 2023-09-15
### Query Response Times
- Customer lookup: 5-15ms
- Order processing: 50-100ms
- Report generation: 1.5-3s
### Resource Utilization
- Peak CPU: 65-75% (during business hours)
- Average CPU: 30-40%
- Memory utilization: 70-85% of allocated
- Disk I/O: 200-400 IOPS average, peaks at 1200 IOPS
### Connection Patterns
- Weekday business hours: 150-200 connections
- Weekday off-hours: 30-50 connections
- Weekend: 20-40 connections
### Transaction Volume
- Peak: ~500 transactions/second (2PM-4PM weekdays)
- Average: ~200 transactions/second (business hours)
- Off-hours: ~50 transactions/second
3. Implement Automated Responses
For certain common issues, set up automated remediation:
# Automated script to kill long-running queries
import psycopg2

def terminate_long_queries():
    conn = psycopg2.connect("dbname=yourdb user=youruser password=yourpass host=localhost")
    conn.autocommit = True
    cursor = conn.cursor()

    # Find queries running more than 30 minutes, excluding maintenance jobs
    cursor.execute("""
        SELECT pid, usename, query_start, query
        FROM pg_stat_activity
        WHERE state = 'active'
          AND now() - query_start > interval '30 minutes'
          AND query NOT ILIKE '%vacuum%'
          AND query NOT ILIKE '%analyze%'
          AND query NOT ILIKE '%maintenance%';
    """)
    long_running_queries = cursor.fetchall()

    for pid, username, start_time, query in long_running_queries:
        # Log the termination
        print(f"Terminating query (PID: {pid}) by {username} running since {start_time}")
        print(f"Query: {query[:100]}...")
        # Terminate the query (parameterized rather than interpolated into SQL)
        try:
            cursor.execute("SELECT pg_terminate_backend(%s);", (pid,))
            print(f"Successfully terminated PID {pid}")
        except Exception as e:
            print(f"Error terminating PID {pid}: {e}")

    cursor.close()
    conn.close()

if __name__ == "__main__":
    terminate_long_queries()
4. Regular Performance Reviews
Schedule weekly or monthly database performance reviews.
# Generate a monthly performance report
import psycopg2
import matplotlib.pyplot as plt
import pandas as pd
from datetime import datetime, timedelta

def generate_monthly_report():
    conn = psycopg2.connect("dbname=metrics user=youruser password=yourpass host=localhost")
    # Assuming you have a metrics table storing historical performance data
    query = """
        SELECT
            sample_date,
            avg_query_time,
            max_query_time,
            avg_connections,
            max_connections,
            cpu_usage,
            memory_usage
        FROM performance_metrics
        WHERE sample_date >= %s AND sample_date < %s
        ORDER BY sample_date;
    """
    last_month_start = datetime.now().replace(day=1) - timedelta(days=1)
    last_month_start = last_month_start.replace(day=1)
    this_month_start = datetime.now().replace(day=1)
    df = pd.read_sql(query, conn, params=(last_month_start, this_month_start))

    # Generate visualizations
    plt.figure(figsize=(12, 8))

    # Query time trends
    plt.subplot(2, 2, 1)
    plt.plot(df['sample_date'], df['avg_query_time'], label='Avg Query Time (ms)')
    plt.plot(df['sample_date'], df['max_query_time'], label='Max Query Time (ms)')
    plt.title('Query Performance')
    plt.legend()

    # Connection trends
    plt.subplot(2, 2, 2)
    plt.plot(df['sample_date'], df['avg_connections'], label='Avg Connections')
    plt.plot(df['sample_date'], df['max_connections'], label='Max Connections')
    plt.title('Connection Usage')
    plt.legend()

    # Resource usage
    plt.subplot(2, 2, 3)
    plt.plot(df['sample_date'], df['cpu_usage'], label='CPU Usage (%)')
    plt.title('CPU Utilization')

    plt.subplot(2, 2, 4)
    plt.plot(df['sample_date'], df['memory_usage'], label='Memory Usage (%)')
    plt.title('Memory Utilization')

    plt.tight_layout()
    plt.savefig(f'database_performance_{last_month_start.strftime("%Y_%m")}.png')

    # Close the connection
    conn.close()
    print(f"Report generated: database_performance_{last_month_start.strftime('%Y_%m')}.png")

if __name__ == "__main__":
    generate_monthly_report()
Advanced Performance Monitoring Techniques
Workload Analysis
Categorize database operations to understand usage patterns.
-- PostgreSQL workload analysis by operation type
SELECT
substring(query, 1, 30) as query_sample,
CASE
WHEN query ILIKE 'SELECT%' THEN 'SELECT'
WHEN query ILIKE 'INSERT%' THEN 'INSERT'
WHEN query ILIKE 'UPDATE%' THEN 'UPDATE'
WHEN query ILIKE 'DELETE%' THEN 'DELETE'
ELSE 'OTHER'
END as operation_type,
count(*) as frequency,
sum(calls) as total_calls,
round(avg(total_time)::numeric, 2) as avg_total_time,
round(avg(mean_time)::numeric, 2) as avg_mean_time
FROM pg_stat_statements
GROUP BY operation_type, query_sample
ORDER BY operation_type, avg_total_time DESC;
Predictive Performance Monitoring
Use historical data to predict future performance issues.
import pandas as pd
import numpy as np
from sklearn.linear_model import LinearRegression
import matplotlib.pyplot as plt
from datetime import datetime, timedelta

# Assuming you have a DataFrame with historical metrics
def predict_resource_usage(metrics_df, days_to_predict=30):
    # Prepare the data
    metrics_df['date_num'] = (metrics_df['sample_date'] - metrics_df['sample_date'].min()).dt.days

    # Create and train the model for CPU prediction
    X = metrics_df[['date_num']]
    y_cpu = metrics_df['cpu_usage']
    model_cpu = LinearRegression()
    model_cpu.fit(X, y_cpu)

    # Generate future dates
    last_date = metrics_df['sample_date'].max()
    last_date_num = metrics_df['date_num'].max()
    future_dates = [last_date + timedelta(days=i) for i in range(1, days_to_predict + 1)]
    future_date_nums = [last_date_num + i for i in range(1, days_to_predict + 1)]

    # Predict future CPU usage
    future_cpu = model_cpu.predict(np.array(future_date_nums).reshape(-1, 1))

    # Create a DataFrame with predictions
    predictions_df = pd.DataFrame({
        'sample_date': future_dates,
        'predicted_cpu': future_cpu
    })

    # Visualize actual vs predicted
    plt.figure(figsize=(12, 6))
    plt.plot(metrics_df['sample_date'], metrics_df['cpu_usage'], label='Historical CPU Usage')
    plt.plot(predictions_df['sample_date'], predictions_df['predicted_cpu'], label='Predicted CPU Usage', linestyle='--')
    plt.title('CPU Usage Trend and Prediction')
    plt.xlabel('Date')
    plt.ylabel('CPU Usage (%)')
    plt.legend()
    plt.savefig('cpu_prediction.png')

    # Check for potential capacity issues
    if max(future_cpu) > 90:
        print("WARNING: CPU usage predicted to exceed 90% within the next month!")
        critical_date = predictions_df[predictions_df['predicted_cpu'] > 90]['sample_date'].min()
        print(f"Estimated date for reaching critical threshold: {critical_date.strftime('%Y-%m-%d')}")

    return predictions_df

# Example usage
# metrics_df = pd.read_csv('historical_metrics.csv', parse_dates=['sample_date'])
# predictions = predict_resource_usage(metrics_df)
Summary
Database performance monitoring is a critical skill for any database administrator. Key takeaways from this guide include:
1. Understand key metrics: Response time, resource utilization, connections, transactions, and wait events are fundamental to database performance.
2. Use appropriate tools: Leverage built-in database tools, third-party solutions, and custom scripts to gain comprehensive performance insights.
3. Establish a monitoring workflow: Set baselines, configure alerts, perform regular monitoring, and follow a structured approach to troubleshooting.
4. Apply best practices: Implement proactive monitoring, document baselines, automate responses, and conduct regular performance reviews.
5. Plan for growth: Use predictive analysis to anticipate performance issues before they affect users.
By implementing effective performance monitoring practices, you'll ensure your database systems remain reliable, efficient, and capable of supporting your application's needs.
Further Learning
Exercises
- Set up basic performance monitoring for a database of your choice using built-in tools.
- Write a script to collect and store key performance metrics every hour for a week.
- Create a simple dashboard to visualize the collected metrics.
- Identify the top 5 slowest queries in your database and optimize at least one of them.
- Simulate a high-load scenario and observe how your database performs under stress.
Additional Resources
- PostgreSQL documentation on Monitoring Database Activity
- MySQL documentation on Performance Schema
- SQL Server documentation on Performance Monitoring and Tuning Tools
- Oracle documentation on Performance Tuning Guide
- Book: "Database Performance Tuning and Optimization" by Sitansu S. Mittra
- Book: "SQL Performance Explained" by Markus Winand