PodMonitor Custom Resources
Introduction
When working with Prometheus in a Kubernetes environment, efficiently discovering and monitoring pods becomes crucial. The Prometheus Operator introduces several Custom Resource Definitions (CRDs) that simplify the configuration of monitoring targets. One of these CRDs is the `PodMonitor` resource, which provides a declarative way to specify which pods should be monitored and how their metrics should be scraped.
In this guide, we'll explore `PodMonitor` custom resources, understand their structure, and learn how to use them effectively to monitor your Kubernetes applications.
What is a PodMonitor Custom Resource?
A `PodMonitor` custom resource allows you to define monitoring configurations for pods in your Kubernetes cluster. It uses label selectors to identify which pods should be monitored and specifies how Prometheus should scrape metrics from them.
The main advantages of using `PodMonitor` resources include:
- Declarative Configuration: Define monitoring in a Kubernetes-native way
- Dynamic Discovery: Automatically detect new pods that match the specified labels
- Fine-grained Control: Customize scraping behavior per pod
- Integration with Prometheus Operator: Seamlessly works with the Prometheus Operator ecosystem
PodMonitor vs ServiceMonitor
Before diving deeper, it's important to understand the difference between `PodMonitor` and `ServiceMonitor`:
- ServiceMonitor: Selects and monitors Kubernetes Services
- PodMonitor: Directly selects and monitors Pods (bypassing Services)
Use a `PodMonitor` when:
- You want to monitor pods that don't have an associated Service
- You need pod-level metrics rather than service-level aggregation
- You want to monitor specific pods in a deployment without creating a dedicated service
PodMonitor Specification
Let's examine the structure of a `PodMonitor` resource:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: example-app
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: example-app
  podMetricsEndpoints:
    - port: metrics
      interval: 15s
      path: /metrics
  namespaceSelector:
    matchNames:
      - default
      - production
```
The main components are:
- `selector`: Defines which pods to monitor using Kubernetes label selectors
- `podMetricsEndpoints`: Specifies how to scrape metrics from the selected pods
- `namespaceSelector`: Optional field selecting which namespaces to search for matching pods; if omitted, only the PodMonitor's own namespace is searched
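To make the `namespaceSelector` behavior concrete, here are the three common variants side by side (illustrative fragments, one per YAML document):

```yaml
---
# Omitted or empty: only the PodMonitor's own namespace is searched.
namespaceSelector: {}
---
# any: true discovers matching pods in every namespace.
namespaceSelector:
  any: true
---
# matchNames restricts discovery to the listed namespaces.
namespaceSelector:
  matchNames:
    - default
    - production
```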
Creating Your First PodMonitor
Let's create a basic `PodMonitor` resource to monitor a simple application. Our example application will be a pod that exposes Prometheus metrics on port 9090.
Step 1: Deploy a Sample Application
First, let's deploy a sample application that exposes metrics:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: metrics-demo
  labels:
    app: metrics-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: metrics-demo
  template:
    metadata:
      labels:
        app: metrics-demo
    spec:
      containers:
        - name: metrics-container
          image: prom/prometheus:v2.35.0
          ports:
            - containerPort: 9090
              name: metrics
          args:
            - --config.file=/etc/prometheus/prometheus.yml
            - --storage.tsdb.path=/prometheus
            - --web.console.libraries=/etc/prometheus/console_libraries
            - --web.console.templates=/etc/prometheus/consoles
            - --web.enable-lifecycle
```
Step 2: Create a PodMonitor Resource
Now, let's create a `PodMonitor` resource to scrape metrics from our application:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: metrics-demo
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: metrics-demo
  namespaceSelector:
    matchNames:
      - default   # the Deployment lives in "default"; without this, only "monitoring" would be searched
  podMetricsEndpoints:
    - port: metrics
      interval: 15s
```
This `PodMonitor` will:
- Select pods with the label `app: metrics-demo`
- Scrape metrics from the port named `metrics`
- Collect metrics every 15 seconds
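Conceptually, `matchLabels` selection is a subset check: a pod matches when every key/value pair in the selector appears among the pod's labels. A rough sketch (illustrative only, not the operator's actual code — pod names and hashes are made up):

```python
# Sketch of matchLabels selection: a pod matches when every selector
# key/value pair is present in the pod's labels.

def matches(match_labels: dict, pod_labels: dict) -> bool:
    return all(pod_labels.get(k) == v for k, v in match_labels.items())

# Hypothetical pods, as a Deployment with two replicas might produce them.
pods = [
    {"name": "metrics-demo-abc12", "labels": {"app": "metrics-demo", "pod-template-hash": "abc12"}},
    {"name": "metrics-demo-def34", "labels": {"app": "metrics-demo", "pod-template-hash": "def34"}},
    {"name": "other-app-xyz99",    "labels": {"app": "other-app"}},
]

selected = [p["name"] for p in pods if matches({"app": "metrics-demo"}, p["labels"])]
print(selected)  # ['metrics-demo-abc12', 'metrics-demo-def34']
```

Extra labels on the pod (like `pod-template-hash`) don't prevent a match; only the selector's own keys are checked.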
Step 3: Apply the Configuration
Apply both configurations to your cluster:
```bash
kubectl apply -f deployment.yaml
kubectl apply -f podmonitor.yaml
```
Step 4: Verify the Configuration
Check if the `PodMonitor` resource was created successfully:

```bash
kubectl get podmonitor -n monitoring
```
Expected output:
```
NAME           AGE
metrics-demo   45s
```
Advanced PodMonitor Configuration
Let's explore some advanced configurations for `PodMonitor` resources:
Multi-Port Monitoring
You can monitor multiple ports on the same pod:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: multi-port-monitor
spec:
  selector:
    matchLabels:
      app: multi-metric-app
  podMetricsEndpoints:
    - port: http-metrics
      interval: 15s
    - port: db-metrics
      interval: 30s
      path: /database/metrics
```
Relabeling and Filtering
You can use relabeling to modify labels before they're stored in Prometheus:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: relabel-example
spec:
  selector:
    matchLabels:
      app: web-app
  podMetricsEndpoints:
    - port: http-metrics
      interval: 15s
      relabelings:
        - sourceLabels: [__meta_kubernetes_pod_container_name]
          targetLabel: container_name
        - sourceLabels: [__meta_kubernetes_pod_name]
          targetLabel: pod_name
```
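These two rules use the default `replace` action: the value of each source label is copied into the target label. A simplified sketch of that behavior (illustrative only — real Prometheus relabeling also supports regex capture, `keep`/`drop` actions, and more):

```python
# Simplified model of relabeling with the default "replace" action:
# copy the (";"-joined) source label values into the target label.

def apply_relabelings(labels: dict, relabelings: list) -> dict:
    out = dict(labels)
    for rule in relabelings:
        # Prometheus joins multiple source label values with ";" by default.
        value = ";".join(out.get(name, "") for name in rule["sourceLabels"])
        out[rule["targetLabel"]] = value
    return out

# Discovery metadata as Kubernetes service discovery might expose it (values are made up).
discovered = {
    "__meta_kubernetes_pod_container_name": "web",
    "__meta_kubernetes_pod_name": "web-app-7d9f",
}
rules = [
    {"sourceLabels": ["__meta_kubernetes_pod_container_name"], "targetLabel": "container_name"},
    {"sourceLabels": ["__meta_kubernetes_pod_name"], "targetLabel": "pod_name"},
]
result = apply_relabelings(discovered, rules)
print(result["container_name"], result["pod_name"])  # web web-app-7d9f
```

`__meta_kubernetes_*` labels are dropped after relabeling, so copying them to plain label names like `pod_name` is what makes the information survive into stored series.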
Authentication and TLS
For secure endpoints, you can configure TLS and authentication:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: secure-metrics
spec:
  selector:
    matchLabels:
      app: secure-app
  podMetricsEndpoints:
    - port: https-metrics
      scheme: https
      tlsConfig:
        insecureSkipVerify: false
        caFile: /etc/prometheus/secrets/metrics-ca/tls.crt
      bearerTokenFile: /etc/prometheus/secrets/metrics-token/token
```
Adding Labels to Metrics
You can add custom labels to all metrics collected by a specific `PodMonitor`:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: custom-labels
spec:
  selector:
    matchLabels:
      app: my-app
  podMetricsEndpoints:
    - port: metrics
      interval: 15s
  jobLabel: my-app-monitoring   # name of a pod label whose value becomes the "job" label
  podTargetLabels:              # pod labels copied onto every scraped series
    - environment
    - region
```
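Conceptually, `podTargetLabels` copies the named pod labels onto every series scraped from that pod. A rough sketch of the effect (illustrative only; the label values below are made up):

```python
# Sketch: podTargetLabels copies the named pod labels onto each scraped series.

def attach_pod_target_labels(series_labels: dict, pod_labels: dict,
                             pod_target_labels: list) -> dict:
    out = dict(series_labels)
    for name in pod_target_labels:
        if name in pod_labels:
            out[name] = pod_labels[name]
    return out

pod_labels = {"app": "my-app", "environment": "staging", "region": "eu-west-1"}
series = {"__name__": "http_requests_total", "code": "200"}
labeled = attach_pod_target_labels(series, pod_labels, ["environment", "region"])
print(labeled)  # environment and region now appear on the series
```

This is useful for slicing dashboards and alerts by deployment environment without the application having to emit those labels itself.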
How PodMonitor Works
To better understand how `PodMonitor` works with the Prometheus Operator, let's examine the workflow:

- You create a `PodMonitor` custom resource in your cluster
- The Prometheus Operator watches for `PodMonitor` resources
- When a new `PodMonitor` is detected, the operator updates the Prometheus configuration
- Prometheus begins discovering and scraping metrics from the pods that match the selector
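The translation step in that workflow can be sketched as a toy function that turns a `PodMonitor` into Prometheus scrape-config entries. The job-name scheme and field mapping below are simplified approximations of what the operator generates, not its actual implementation:

```python
# Toy sketch: translate a PodMonitor into Prometheus scrape configs.
# The operator names generated jobs roughly "podMonitor/<ns>/<name>/<endpoint-index>";
# everything else here is a deliberate simplification.

def pod_monitor_to_scrape_configs(pm: dict) -> list:
    ns = pm["metadata"]["namespace"]
    name = pm["metadata"]["name"]
    configs = []
    for i, ep in enumerate(pm["spec"]["podMetricsEndpoints"]):
        configs.append({
            "job_name": f"podMonitor/{ns}/{name}/{i}",
            "scrape_interval": ep.get("interval", "30s"),
            "metrics_path": ep.get("path", "/metrics"),
        })
    return configs

pm = {
    "metadata": {"namespace": "monitoring", "name": "metrics-demo"},
    "spec": {"podMetricsEndpoints": [{"port": "metrics", "interval": "15s"}]},
}
print(pod_monitor_to_scrape_configs(pm))
```

One scrape job is produced per endpoint entry, which is why multi-port PodMonitors show up as multiple targets groups in the Prometheus UI.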
Real-World Example: Monitoring a Database with Sidecar
Let's look at a practical example of using a `PodMonitor` to monitor a database with a metrics-exporter sidecar:
```yaml
# Database Deployment with a metrics exporter sidecar
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-db
  labels:
    app: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13
          ports:
            - containerPort: 5432
              name: postgres
          env:
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secrets
                  key: password
        - name: metrics-exporter
          image: wrouesnel/postgres_exporter:v0.9.0
          ports:
            - containerPort: 9187
              name: metrics
          env:
            - name: DATA_SOURCE_NAME
              value: "postgresql://postgres:password@localhost:5432/postgres?sslmode=disable"
---
# PodMonitor for Postgres
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: postgres-monitor
  namespace: monitoring
spec:
  selector:
    matchLabels:
      app: postgres
  podMetricsEndpoints:
    - port: metrics
      interval: 30s
      metricRelabelings:
        - sourceLabels: [__name__]
          regex: 'pg_.*'
          action: keep
```
This example:
- Deploys a PostgreSQL database with a sidecar metrics exporter
- Creates a `PodMonitor` that scrapes metrics from the exporter
- Uses `metricRelabelings` to keep only PostgreSQL-specific metrics (those starting with `pg_`)
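The effect of the `keep` action can be sketched as a filter over metric names: series whose `__name__` fully matches the anchored regex survive, everything else is dropped. A simplified illustration (the sample metric names are typical exporter output, not taken from a real scrape):

```python
import re

# Sketch of metricRelabelings with action "keep": only series whose
# __name__ fully matches the regex survive (Prometheus anchors the match).

def keep(metric_names: list, pattern: str) -> list:
    rx = re.compile(pattern)
    return [m for m in metric_names if rx.fullmatch(m)]

scraped = ["pg_up", "pg_stat_activity_count", "go_goroutines", "process_cpu_seconds_total"]
print(keep(scraped, "pg_.*"))  # ['pg_up', 'pg_stat_activity_count']
```

Dropping the exporter's own Go runtime metrics this way reduces storage cost, at the price of losing visibility into the exporter process itself.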
Troubleshooting PodMonitor Issues
Common issues with `PodMonitor` resources and how to solve them:
Pod Metrics Not Being Scraped
If your pod metrics aren't showing up in Prometheus, check:
- Label Selector: Ensure the labels in your `PodMonitor` selector match your pod labels:

  ```bash
  kubectl get pods --show-labels
  ```

- Port Configuration: Verify that the port name in `podMetricsEndpoints` matches the pod's container port name:

  ```bash
  kubectl describe pod <pod-name>
  ```

- Namespace Selection: Check that the `namespaceSelector` is configured correctly for your use case.

- Prometheus Operator Status: Verify that the Prometheus Operator is running and processing your `PodMonitor`:

  ```bash
  kubectl logs -f -l app=prometheus-operator
  ```
Viewing Generated Prometheus Configuration
To see how your `PodMonitor` is translated into Prometheus configuration, decode the generated config secret (named `prometheus-<name>`, where `<name>` is your Prometheus resource's name):

```bash
kubectl get secret -n monitoring prometheus-<name> -o json | jq -r '.data["prometheus.yaml.gz"]' | base64 -d | gunzip
```
Integrating with PrometheusRule
`PodMonitor` works best when combined with `PrometheusRule` resources for alerting:
```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: pod-alerts
  namespace: monitoring
spec:
  groups:
    - name: pod.rules
      rules:
        - alert: PodHighCPUUsage
          expr: sum(rate(container_cpu_usage_seconds_total{pod=~"metrics-demo-.*"}[5m])) by (pod) > 0.5
          for: 10m
          labels:
            severity: warning
          annotations:
            summary: High CPU usage on {{ $labels.pod }}
            description: Pod {{ $labels.pod }} has been using more than 50% CPU for the last 10 minutes.
```
This rule creates an alert that triggers when the `metrics-demo` pods exceed 50% of a CPU core for more than 10 minutes. (Note that `container_cpu_usage_seconds_total` comes from the kubelet's cAdvisor metrics rather than from the PodMonitor itself.)
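To see why the threshold is `0.5`, it helps to unpack what `rate()` computes: roughly the per-second increase of the counter over the window. A simplified illustration (this ignores counter resets and Prometheus's extrapolation; the sample values are invented):

```python
# Simplified model of rate(): (last - first) / elapsed_seconds over the window.
# Real Prometheus also handles counter resets and extrapolates to window edges.

def simple_rate(samples: list) -> float:
    (t0, v0), (t1, v1) = samples[0], samples[-1]
    return (v1 - v0) / (t1 - t0)

# container_cpu_usage_seconds_total samples over a 5-minute window: (unix_ts, value)
window = [(0, 100.0), (150, 190.0), (300, 280.0)]
cpu = simple_rate(window)  # 180 CPU-seconds consumed over 300s = 0.6 cores
print(cpu, cpu > 0.5)      # the alert condition holds; after "for: 10m" it fires
```

Because the metric counts CPU-seconds, its rate is dimensionless "cores used", so `> 0.5` means "more than half a core".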
Summary
In this guide, we've explored the `PodMonitor` custom resource provided by the Prometheus Operator for Kubernetes. You've learned:
- What `PodMonitor` resources are and when to use them
- How to create basic and advanced `PodMonitor` configurations
- Real-world examples for monitoring applications
- Troubleshooting techniques for common issues
`PodMonitor` resources simplify the task of configuring Prometheus to monitor your Kubernetes workloads. By using them, you can adopt a GitOps approach to monitoring, where your monitoring configuration is managed alongside your application code.
Additional Resources
- Prometheus Operator Documentation
- Kubernetes Custom Resources
- Prometheus Configuration Documentation
Exercises
- Create a `PodMonitor` for an existing application in your cluster
- Implement a `PodMonitor` that uses relabeling to add environment information to your metrics
- Set up a `PodMonitor` with authentication for a secure endpoint
- Combine a `PodMonitor` with a `PrometheusRule` to create alerts for your application
- Experiment with different scrape intervals and observe the effect on metric resolution