Best PostgreSQL Monitoring Tools in 2026
February 4, 2026 · Ghazi
What to Monitor in PostgreSQL
Before diving into specific tools, it helps to know what you should actually be watching. PostgreSQL exposes a wealth of metrics through its statistics collector and system catalog. The most important areas to monitor include:
- Query performance: Slow queries, query frequency, execution plans, and total time spent per query
- Connections: Active, idle, and waiting connections relative to your max_connections limit
- Replication lag: How far replicas are behind the primary, measured in bytes or time
- Cache hit ratio: The percentage of reads served from shared buffers versus disk — ideally above 99%
- Table bloat and vacuum activity: Dead tuples accumulating, autovacuum runs, and table sizes growing unexpectedly
- Lock contention: Queries waiting on locks, deadlocks, and long-running transactions holding locks
- WAL generation: Write-ahead log volume, which affects disk I/O and replication bandwidth
- Disk and memory usage: Tablespace sizes, temporary file creation, and shared buffer utilization
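As a quick first pass on several of these areas, you can count sessions by state straight from pg_stat_activity (a minimal sketch; run it in any SQL client):

```sql
-- Count sessions by state; compare the total against max_connections
SELECT state, count(*) AS sessions
FROM pg_stat_activity
GROUP BY state
ORDER BY sessions DESC;
```

A large number of 'idle in transaction' sessions here is often the first hint of the lock-contention and bloat problems listed above.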
1. PostgreSQL Built-in Statistics Views
PostgreSQL ships with a comprehensive set of statistics views that require no additional installation. These views are the foundation that most external monitoring tools build on. If you are comfortable writing SQL, you can get surprisingly far with just these.
Key Views
- pg_stat_activity — Shows all current connections, their state, the query they are running, and how long they've been active
- pg_stat_user_tables — Table-level statistics including sequential scans, index scans, inserts, updates, deletes, and dead tuples
- pg_stat_bgwriter — Background writer and checkpoint activity, useful for tuning checkpoint settings
- pg_stat_replication — Replication status for each standby, including write, flush, and replay lag
- pg_locks — All current locks held or awaited, essential for debugging lock contention
- pg_stat_io — I/O statistics by backend type, added in PostgreSQL 16
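For instance, replication lag per standby can be read from pg_stat_replication (a sketch; the replay_lag column exists on PostgreSQL 10 and later):

```sql
-- Bytes of WAL each standby still has to replay, plus the interval-typed lag
SELECT application_name, client_addr, state,
       pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS replay_lag_bytes,
       replay_lag
FROM pg_stat_replication;
```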
Example: Check Cache Hit Ratio
```sql
SELECT
  sum(heap_blks_hit) / nullif(sum(heap_blks_hit) + sum(heap_blks_read), 0) AS cache_hit_ratio
FROM pg_statio_user_tables;
```
Example: Find Long-Running Queries
```sql
SELECT
  pid,
  now() - pg_stat_activity.query_start AS duration,
  query,
  state
FROM pg_stat_activity
WHERE state != 'idle'
  AND now() - pg_stat_activity.query_start > interval '5 minutes'
ORDER BY duration DESC;
```
Best For
Ad-hoc debugging and quick health checks. The built-in views are always available, have zero overhead to set up, and are the first place to look when something goes wrong. They are also useful for building custom monitoring scripts or dashboards.
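One more view worth a scripted check is pg_stat_user_tables, for dead tuples that autovacuum has not yet reclaimed (a simple sketch for a health-check script):

```sql
-- Tables with the most dead tuples awaiting vacuum
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```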
2. pg_stat_statements
pg_stat_statements is an official PostgreSQL extension that tracks execution statistics for all SQL statements. It is arguably the single most important monitoring extension you can enable. Once loaded, it records how many times each query has been called, total and average execution time, rows returned, and buffer usage.
Key Features
- Tracks cumulative execution statistics for every normalized query
- Shows total time, mean time, min/max time, calls, rows, and buffer hits per query
- Normalizes queries by replacing literal values with parameters, so SELECT * FROM users WHERE id = 1 and SELECT * FROM users WHERE id = 2 are grouped together
- Low overhead — safe to run in production (recommended by the PostgreSQL community)
- Added temp_blks_read, temp_blks_written, and JIT statistics in recent versions
Setup
```sql
-- Add to postgresql.conf
shared_preload_libraries = 'pg_stat_statements'

-- Then create the extension
CREATE EXTENSION pg_stat_statements;
```
Example: Top 10 Slowest Queries by Total Time
```sql
SELECT
  queryid,
  calls,
  round(total_exec_time::numeric, 2) AS total_time_ms,
  round(mean_exec_time::numeric, 2) AS avg_time_ms,
  rows,
  query
FROM pg_stat_statements
ORDER BY total_exec_time DESC
LIMIT 10;
```
Best For
Every PostgreSQL deployment. There is almost no reason not to enable pg_stat_statements. It is the most efficient way to identify which queries are consuming the most resources and should be the starting point for any performance investigation.
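One operational note: the view's counters are cumulative since the last reset, so before measuring the effect of a configuration or query change you may want a clean window:

```sql
-- Discard accumulated statistics and start a fresh measurement window
SELECT pg_stat_statements_reset();
```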
3. pgBadger
pgBadger is a fast, open-source PostgreSQL log analyzer written in Perl. It parses your PostgreSQL log files and generates detailed HTML reports with charts covering query performance, error rates, connection patterns, checkpoint activity, and more.
Key Features
- Generates beautiful, self-contained HTML reports from log files
- Analyzes slow queries, most frequent queries, error distribution, and lock statistics
- Supports multiple log formats (stderr, csvlog, syslog, jsonlog)
- Incremental mode for processing logs continuously
- Can generate reports for specific time ranges
- Completely offline — runs against log files, not the live database
Setup
```ini
# Configure PostgreSQL logging for pgBadger (postgresql.conf)
log_min_duration_statement = 0   # Log all queries (or set a threshold)
log_checkpoints = on
log_connections = on
log_disconnections = on
log_lock_waits = on
log_line_prefix = '%t [%p]: user=%u,db=%d,app=%a,client=%h '
```

```shell
# Generate a report
pgbadger /var/log/postgresql/postgresql.log -o report.html
```
Pricing
Free and open source. Available on GitHub under the PostgreSQL license.
Best For
Post-incident analysis and periodic performance reviews. pgBadger is excellent for understanding what happened during a specific time window. It is not a real-time monitoring tool, but its reports are incredibly detailed and useful for spotting patterns you would miss with live dashboards alone.
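For continuous processing, incremental mode can build a rolling set of daily and weekly reports from a cron job (the paths here are illustrative, not prescriptive):

```shell
# -I enables incremental mode; -O sets the output directory for the HTML reports
pgbadger -I -O /var/www/pgbadger /var/log/postgresql/postgresql-*.log
```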
4. Prometheus + postgres_exporter
Prometheus is the industry-standard open-source monitoring and alerting toolkit, and postgres_exporter is the community exporter that collects PostgreSQL metrics and exposes them in Prometheus format. Pair this with Grafana for dashboards and you have a powerful, fully open-source monitoring stack.
Key Features
- Exposes 100+ PostgreSQL metrics including connections, replication lag, table statistics, and buffer usage
- Custom query support — define your own SQL queries and expose them as Prometheus metrics
- Grafana dashboards available out of the box (community dashboards on Grafana.com)
- Flexible alerting through Prometheus Alertmanager (PagerDuty, Slack, email, webhooks)
- Multi-instance monitoring — one Prometheus server can scrape dozens of PostgreSQL instances
- Long-term metric storage and historical analysis
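As an illustration of the custom query support, postgres_exporter accepts a YAML file of SQL-to-metric mappings via --extend.query-path (the metric name and query below are examples, not a shipped default):

```yaml
# queries.yaml — passed via --extend.query-path=queries.yaml
pg_replication:
  query: "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp())) AS lag_seconds"
  metrics:
    - lag_seconds:
        usage: "GAUGE"
        description: "Replication lag behind the primary, in seconds"
```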
Setup
```shell
# Run postgres_exporter
export DATA_SOURCE_NAME="postgresql://user:password@localhost:5432/postgres?sslmode=disable"
./postgres_exporter
```

```yaml
# Add to prometheus.yml
scrape_configs:
  - job_name: 'postgresql'
    static_configs:
      - targets: ['localhost:9187']
```
Pricing
Free and open source. Prometheus, postgres_exporter, and Grafana are all available under open-source licenses. The cost is the infrastructure to run them and the time to set them up.
Best For
Teams that already use Prometheus and Grafana or want a fully open-source monitoring stack. This is the most common setup for self-hosted PostgreSQL monitoring in production and gives you complete control over your metrics pipeline.
5. Datadog
Datadog is a commercial observability platform that offers deep PostgreSQL integration out of the box. Its agent collects metrics from your PostgreSQL instances and provides pre-built dashboards, anomaly detection, and alerting without requiring you to manage any monitoring infrastructure.
Key Features
- Pre-built PostgreSQL dashboard with 50+ metrics out of the box
- Deep query-level monitoring with Database Monitoring (DBM) — see query plans, wait events, and blocking queries in real time
- Anomaly detection and forecasting on key metrics
- Correlate database metrics with application traces and logs
- Support for managed PostgreSQL services (RDS, Cloud SQL, Azure)
- Customizable alerts with multi-channel notifications
Pricing
Infrastructure Monitoring starts at $15/host/month. Database Monitoring is an add-on at $70/host/month. Pricing can add up quickly for larger deployments but includes support and managed infrastructure. Free trial available.
Best For
Teams that want a managed, full-stack observability platform and are willing to pay for it. Datadog is especially valuable when you need to correlate database performance with application metrics and traces across your entire stack.
6. pgwatch2
pgwatch2 is a flexible, open-source PostgreSQL monitoring tool that collects metrics using SQL queries and stores them in a time-series database. It comes with pre-built Grafana dashboards and covers a wide range of PostgreSQL internals out of the box.
Key Features
- Over 30 built-in metric collection presets covering connections, queries, table stats, replication, bloat, and more
- Easy to extend with custom SQL-based metric definitions
- Supports multiple storage backends (PostgreSQL, TimescaleDB, InfluxDB, Prometheus, JSON files)
- Web UI for managing monitored databases and metric configurations
- Docker-based deployment with a single command
- Lightweight — designed to run with minimal resource overhead
Setup
```shell
# Quick start with Docker
docker run -d --name pgwatch2 \
  -p 3000:3000 -p 8080:8080 \
  -e PW2_PG_HOST=host.docker.internal \
  -e PW2_PG_PORT=5432 \
  -e PW2_PG_USER=pgwatch2 \
  -e PW2_PG_PASSWORD=secret \
  cybertec/pgwatch2-postgres
```
Pricing
Free and open source under the BSD license. Commercial support is available from CYBERTEC, the maintainer company.
Best For
Teams that want a PostgreSQL-specific monitoring solution without the complexity of a general-purpose observability platform. pgwatch2 is opinionated about PostgreSQL in a good way — it knows exactly which metrics matter and collects them efficiently.
7. Percona Monitoring and Management (PMM)
Percona Monitoring and Management (PMM) is a free, open-source platform for monitoring and managing database performance. While it supports MySQL and MongoDB as well, its PostgreSQL monitoring is comprehensive and includes query analytics powered by pg_stat_statements and pg_stat_monitor.
Key Features
- Query Analytics (QAN) with detailed per-query metrics, execution plans, and examples
- Pre-built Grafana dashboards for PostgreSQL instance, database, and replication monitoring
- Integrated alerting with Alertmanager
- Advisors that provide automated recommendations for configuration and performance tuning
- Supports both self-managed and cloud-hosted PostgreSQL instances
- Based on familiar open-source tools (Prometheus, Grafana, ClickHouse)
Setup
```shell
# Install PMM Server
docker run -d --name pmm-server \
  -p 443:8443 \
  percona/pmm-server:latest

# Install PMM Client and add PostgreSQL
pmm-admin add postgresql \
  --username=pmm_user \
  --password=secret \
  --host=localhost \
  --port=5432 \
  --service-name=my-postgres
```
Pricing
Free and open source. Percona offers commercial support subscriptions and a managed platform (Percona Portal) for teams that want additional features and enterprise support.
Best For
Teams running multiple database engines (PostgreSQL, MySQL, MongoDB) who want a unified monitoring platform. PMM's query analytics are particularly strong and rival commercial tools for identifying slow and problematic queries.
8. Cloud Provider Monitoring
If you run PostgreSQL on a managed cloud service, your provider includes built-in monitoring. These tools are tightly integrated with the platform and require no additional setup, though they vary in depth and flexibility.
AWS RDS Performance Insights
- Visualizes database load by wait event, SQL statement, host, or user
- Identifies top SQL queries by load with drill-down into execution statistics
- Free tier covers 7 days of performance history
- Integrates with CloudWatch for alerts and dashboards
Google Cloud SQL Insights
- Query-level performance diagnostics with query plans
- AI-driven recommendations for query optimization
- Integrated with Cloud Monitoring and Cloud Logging
Azure Database Intelligent Performance
- Query Performance Insight with top resource-consuming queries
- Performance recommendations for index and configuration tuning
- Integrated with Azure Monitor and Log Analytics
Best For
Teams using managed PostgreSQL who want basic to intermediate monitoring without additional tooling. Cloud provider monitoring is a good starting point, but most teams eventually supplement it with more detailed tools like pg_stat_statements or a dedicated monitoring platform.
Comparison Table
| Tool | Type | Real-time | Dashboards | Alerting | Cost |
|---|---|---|---|---|---|
| Built-in Stats Views | SQL queries | Yes | No | No | Free |
| pg_stat_statements | Extension | Yes | No | No | Free |
| pgBadger | Log analyzer | No | HTML reports | No | Free |
| Prometheus + exporter | Metrics pipeline | Yes | Grafana | Alertmanager | Free (self-hosted) |
| Datadog | SaaS platform | Yes | Built-in | Built-in | From $15/host/mo |
| pgwatch2 | Monitoring agent | Yes | Grafana | Via Grafana | Free |
| Percona PMM | Monitoring platform | Yes | Grafana | Alertmanager | Free |
| Cloud Provider | Managed service | Yes | Built-in | Built-in | Included |
Building a Monitoring Stack
Most production PostgreSQL deployments use a combination of tools rather than relying on a single solution. Here is a practical approach to building a monitoring stack:
- Start with pg_stat_statements. Enable it on every PostgreSQL instance. It has negligible overhead and gives you the most actionable data about query performance.
- Add Prometheus + Grafana for dashboards and alerting. This gives you real-time visibility and historical trends. Use postgres_exporter for standard metrics and custom queries for anything specific to your application.
- Use pgBadger for periodic deep dives. Generate weekly or post-incident reports from your logs. pgBadger catches patterns that real-time dashboards often miss.
- Consider a managed platform if you want less operational burden. Datadog or Percona PMM can replace the Prometheus + Grafana stack with less maintenance, at different cost trade-offs.
Key Metrics to Alert On
Having dashboards is one thing. Knowing what to actually alert on is another. Here are the metrics that most teams find worth waking up for:
- Replication lag exceeding threshold — If your replica is falling behind, reads may be serving stale data and failover readiness is compromised
- Connection count approaching max_connections — Running out of connections will cause application errors. Alert at 80% capacity.
- Long-running transactions — Transactions open for more than a few minutes can cause bloat, lock contention, and replication issues
- Cache hit ratio dropping below 95% — A sudden drop means the working set no longer fits in memory and disk I/O is increasing
- Dead tuples growing faster than autovacuum can clean — This leads to table bloat and degrading query performance
- WAL archiving falling behind — If WAL archiving cannot keep up, you risk running out of disk space and losing the ability to do point-in-time recovery
- Deadlocks occurring — Occasional deadlocks may be acceptable, but frequent ones indicate a design problem
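For the connection-capacity alert above, the 80% threshold can be computed directly (a sketch; superuser-reserved slots are ignored here):

```sql
-- Percentage of max_connections currently in use
SELECT count(*) AS in_use,
       current_setting('max_connections')::int AS max_conn,
       round(100.0 * count(*) / current_setting('max_connections')::int, 1) AS pct_used
FROM pg_stat_activity;
```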
How to Choose
The right monitoring setup depends on your team size, budget, and how critical your PostgreSQL instances are:
- Solo developer or small project: Enable pg_stat_statements, familiarize yourself with the built-in stats views, and run pgBadger periodically. This costs nothing and covers the essentials.
- Growing team with self-hosted Postgres: Add Prometheus + postgres_exporter + Grafana for real-time dashboards and alerting. pgwatch2 is a good alternative if you want something more PostgreSQL-focused.
- Team using managed PostgreSQL: Start with your cloud provider's built-in monitoring (Performance Insights, Cloud SQL Insights). Add pg_stat_statements if your provider supports it. Supplement with Datadog or PMM if you need deeper query analytics.
- Multiple database engines: Percona PMM or Datadog provide unified monitoring across PostgreSQL, MySQL, and MongoDB from a single platform.
No matter which monitoring tools you use, you still need a good client for connecting to your databases, running queries against stats views, and investigating issues hands-on. PostgresGUI is a lightweight, native PostgreSQL client for Mac that makes it easy to explore your database, run diagnostic queries, and understand what your monitoring data is telling you.