# Metrics & Monitoring
Two data sources provide a complete picture of your database cluster health.
## Data Sources
| Source | What it provides | How it works |
|---|---|---|
| Kubernetes metrics-server | Pod CPU and memory usage | Reads kubelet cgroup stats via `metricsClient.PodMetricses()` |
| CNPG Prometheus exporter | Connections, DB size, WAL, backup timestamps | Queries port 9187 on each pod via `remotecommand` exec |
Both sources are queried on every `/metrics/current` call; no Prometheus Operator is required.
## Cluster Overview
```bash
curl http://localhost:24005/api/provision/my-db/metrics/current
```

```json
{
  "metricsAvailable": true,
  "healthStatus": "HEALTHY",
  "cpuUsageCores": 0.057,
  "cpuLimitCores": 6,
  "cpuUsagePercent": 0.9,
  "memoryUsageMB": 257,
  "memoryLimitMB": 12288,
  "memoryUsagePercent": 2.1,
  "storageLimit": "50Gi",
  "instanceCount": 3,
  "activeConnections": 5,
  "maxConnections": 100,
  "lastBackupTime": "2026-03-26T14:20:40Z"
}
```
Limits come from the tier configuration. Percentages are calculated as `usage / (perInstanceLimit * instanceCount) * 100`, so the top-level `cpuLimitCores` and `memoryLimitMB` fields already reflect the whole cluster (for example, `cpuLimitCores: 6` is three instances with a 2-core per-pod limit).
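The percentage math above can be sketched as a small helper. This is an illustrative function (`usagePercent` is not a name from the actual service code), using the values from the example response:

```go
package main

import "fmt"

// usagePercent mirrors the documented formula:
// usage / (perInstanceLimit * instanceCount) * 100.
func usagePercent(usage, perInstanceLimit float64, instanceCount int) float64 {
	return usage / (perInstanceLimit * float64(instanceCount)) * 100
}

func main() {
	// Example response values: 3 instances, 2-core / 4096 MB per-pod limits.
	fmt.Println(usagePercent(0.057, 2, 3))  // ≈0.95, reported as 0.9 in the example
	fmt.Println(usagePercent(257, 4096, 3)) // ≈2.09, reported as 2.1
}
```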
## Per-Pod Breakdown
Every response includes a `pods` array with per-pod CPU and memory:
```json
{
  "pods": [
    {
      "name": "my-db-postgres-1",
      "role": "primary",
      "cpuCores": 0.024,
      "cpuLimitCores": 2,
      "memoryMB": 93,
      "memoryLimitMB": 4096
    },
    {
      "name": "my-db-postgres-2",
      "role": "replica",
      "cpuCores": 0.018,
      "cpuLimitCores": 2,
      "memoryMB": 97,
      "memoryLimitMB": 4096
    }
  ]
}
```
This data comes from the Kubernetes metrics-server, the same source `kubectl top pods` uses.
## CNPG Metrics (Port 9187)
The CloudNativePG operator runs a Prometheus exporter on port 9187 in every pod. We query it via client-go `remotecommand` (no `kubectl` required):
| Metric | Source |
|---|---|
| `activeConnections` | `cnpg_backends_total` (summed across all pods) |
| `idleConnections` | `cnpg_backends_total` filtered by `state="idle"` |
| `maxConnections` | `cnpg_pg_settings_setting{name="max_connections"}` |
| `databaseSizeGB` | `cnpg_pg_database_size_bytes{datname="app"}` |
| `lastBackupTime` | `cnpg_collector_last_available_backup_timestamp` |
For multi-instance tiers (STANDARD/ENTERPRISE), connections are summed across all pods — primary and replicas.
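Summing a gauge like `cnpg_backends_total` across scrapes boils down to parsing the Prometheus text exposition format. A minimal sketch, assuming the payload has already been fetched from port 9187 (`sumMetric` is a hypothetical helper, and the label handling is a simplified prefix check rather than a full parser):

```go
package main

import (
	"bufio"
	"fmt"
	"strconv"
	"strings"
)

// sumMetric adds up every sample of the named metric in a Prometheus
// text-format payload, e.g. all states of cnpg_backends_total on one pod.
func sumMetric(payload, name string) float64 {
	var total float64
	sc := bufio.NewScanner(strings.NewReader(payload))
	for sc.Scan() {
		line := sc.Text()
		// Skip comments (# HELP / # TYPE) and unrelated metrics.
		if strings.HasPrefix(line, "#") || !strings.HasPrefix(line, name) {
			continue
		}
		fields := strings.Fields(line)
		if len(fields) < 2 {
			continue
		}
		// The sample value is the last whitespace-separated field.
		if v, err := strconv.ParseFloat(fields[len(fields)-1], 64); err == nil {
			total += v
		}
	}
	return total
}

func main() {
	// Hypothetical scrape output from one pod.
	payload := "# TYPE cnpg_backends_total gauge\n" +
		"cnpg_backends_total{state=\"active\"} 3\n" +
		"cnpg_backends_total{state=\"idle\"} 2\n"
	fmt.Println(sumMetric(payload, "cnpg_backends_total")) // 5
}
```

Calling this once per pod and adding the results gives the cluster-wide connection count described above.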
## PodMonitor
STANDARD and ENTERPRISE tiers set `enablePodMonitor: true` in the CNPG `Cluster` resource. This creates a `PodMonitor` resource that Prometheus Operator (if installed) uses to auto-discover scrape targets.
The FREE tier keeps `enablePodMonitor: false` to avoid creating a `PodMonitor` custom resource whose CRD may not exist on clusters without Prometheus Operator.
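For reference, the relevant fragment of a CNPG `Cluster` manifest looks roughly like this (the cluster name and instance count are illustrative, not taken from the service):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: my-db-postgres
spec:
  instances: 3
  monitoring:
    enablePodMonitor: true   # false on the FREE tier
```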
## Metrics History
```bash
curl "http://localhost:24005/api/provision/my-db/metrics/history?limit=30"
```
History is stored on disk at `provisioning-data/projects/{projectId}/metrics-history.json`. Up to 100 data points are kept in memory.
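The 100-point in-memory cap behaves like a simple ring buffer. A minimal sketch (the `historyBuffer` type and its fields are illustrative names, not the actual service code, and a `float64` stands in for a full metrics snapshot):

```go
package main

import "fmt"

// historyBuffer keeps only the most recent max entries, mirroring the
// documented cap of 100 in-memory data points.
type historyBuffer struct {
	max    int
	points []float64 // stand-in for full metric snapshots
}

// add appends a point and drops the oldest entries past the cap.
func (h *historyBuffer) add(p float64) {
	h.points = append(h.points, p)
	if len(h.points) > h.max {
		h.points = h.points[len(h.points)-h.max:]
	}
}

func main() {
	h := &historyBuffer{max: 100}
	for i := 0; i < 150; i++ {
		h.add(float64(i))
	}
	// Only the newest 100 points survive: 50 through 149.
	fmt.Println(len(h.points), h.points[0]) // 100 50
}
```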
## SSE Streaming
```bash
curl http://localhost:24005/api/provision/my-db/metrics/stream
```
A Server-Sent Events stream that emits a metrics snapshot every 5 seconds. It uses `http.Flusher`; no WebSocket is required.
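An SSE endpoint of this shape can be sketched with the standard library alone. This is a minimal handler under stated assumptions: `streamMetrics` and `collectMetrics` are hypothetical names, and `collectMetrics` stands in for whatever builds the real snapshot JSON:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// streamMetrics writes one SSE event immediately, then one every 5 seconds,
// flushing after each write so the client sees data without buffering delays.
func streamMetrics(w http.ResponseWriter, r *http.Request) {
	w.Header().Set("Content-Type", "text/event-stream")
	w.Header().Set("Cache-Control", "no-cache")
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	ticker := time.NewTicker(5 * time.Second)
	defer ticker.Stop()
	for {
		fmt.Fprintf(w, "data: %s\n\n", collectMetrics())
		flusher.Flush()
		select {
		case <-r.Context().Done(): // client disconnected
			return
		case <-ticker.C:
		}
	}
}

// collectMetrics is a placeholder for the real snapshot logic.
func collectMetrics() string {
	return `{"metricsAvailable":true}`
}

func main() {
	http.HandleFunc("/metrics/stream", streamMetrics)
	// http.ListenAndServe(":24005", nil) // commented out in this sketch
}
```

Watching `r.Context().Done()` stops the goroutine as soon as the client hangs up, which keeps abandoned streams from leaking tickers.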
## When Metrics Are Unavailable
If the pod is unreachable or the Prometheus exporter isn't running:
```json
{
  "metricsAvailable": false,
  "unavailableReason": "exec: pods \"my-db-postgres-1\" not found"
}
```
The frontend shows a helpful gate UI explaining how to enable metrics.