I have a master and hot standby setup with PostgreSQL 9.3, and I'm attempting to monitor the state of replication on the standby using the check_postgres
tool and the "hot_standby_delay" action. This seems to work by calculating the difference in bytes between the xlog positions on the master and on the standby.
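For reference, my understanding is that the check compares the master's current xlog position with the last position the standby has replayed; roughly the following (a sketch using the 9.3 xlog functions, not necessarily the exact queries check_postgres runs):

-- On the master: the current xlog (WAL) write position.
SELECT pg_current_xlog_location();

-- On the standby: the last xlog position replayed during recovery.
SELECT pg_last_xlog_replay_location();

-- Either node can then compute the byte delta between two positions
-- (the LSN literals here are just illustrative values):
SELECT pg_xlog_location_diff('0/3000F00', '0/3000000') AS delay_bytes;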
In numerous online examples I have seen warning and critical thresholds for this in the < 1MB range. The exact command we are using in Nagios is:
/usr/local/bin/check_postgres.pl --action=hot_standby_delay --host=$HOSTNOTES$,$HOSTADDRESS$ --port=5432 --dbname=monitoring --dbuser=monitoring --dbpass=monitoring --warning=1000000 --critical=5000000
This should trigger a warning at roughly 1 MB of lag and a critical at roughly 5 MB. However, on our servers we routinely see the reported delta spike far beyond those thresholds, like this:
[1417719713] SERVICE ALERT: host;PostgreSQL: Replication Delay;CRITICAL;SOFT;1;POSTGRES_HOT_STANDBY_DELAY CRITICAL: DB "monitoring" (host:host.example.com) 121175880
[1417719773] SERVICE ALERT: host;PostgreSQL: Replication Delay;CRITICAL;SOFT;2;POSTGRES_HOT_STANDBY_DELAY CRITICAL: DB "monitoring" (host:host.example.com) 132780968
[1417719833] SERVICE ALERT: host;PostgreSQL: Replication Delay;CRITICAL;SOFT;3;POSTGRES_HOT_STANDBY_DELAY CRITICAL: DB "monitoring" (host:host.example.com) 21412936
Followed up on the next Nagios check with:
[1417719893] SERVICE ALERT: host;PostgreSQL: Replication Delay;OK;SOFT;4;POSTGRES_HOT_STANDBY_DELAY OK: DB "monitoring" (host:host.example.com) 0
So replication itself appears to be working (and indeed, an update made on the master is visible on the standby almost immediately).
Unfortunately this behavior makes the monitoring useless, since it triggers false positives many times a day. From what I've found in the documentation and in other people's examples, this result is not typical: most people can set a threshold of 1 MB or less and only see alerts when something is actually wrong.
Does anybody have any idea what I could try in the configuration to remedy this? On this particular install we have changed only a few parameters, and of those, only wal_keep_segments seems even remotely related (we have it set to 128).
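As a quick sanity check (nothing authoritative, just confirming the setting), the value can be read back on the master; at the default 16 MB per WAL segment, 128 segments means roughly 2 GB of WAL is retained:

SHOW wal_keep_segments;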
Both master and standby are hosted in EC2 in the same availability zone and there don't seem to be any communication delays between them. This is also a very low-traffic database so I am uncertain as to how the xlog delta could be that far off to begin with, unless I am missing some very critical fact.
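For what it's worth, the master's view of how far the standby has received and replayed WAL can be inspected directly; a minimal sketch, assuming the 9.3 pg_stat_replication columns and a role privileged enough to see the location fields:

-- Run on the master: one row per connected standby.
SELECT application_name,
       state,
       pg_xlog_location_diff(pg_current_xlog_location(), sent_location)   AS sent_lag_bytes,
       pg_xlog_location_diff(pg_current_xlog_location(), replay_location) AS replay_lag_bytes
  FROM pg_stat_replication;

If sent_lag_bytes stays near zero while replay_lag_bytes spikes, the delay is in replay on the standby rather than in shipping the WAL.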
A check that returns SOFT CRITICAL does not trigger notifications, because it has not yet reached the max_check_attempts threshold. This is not a false positive; it is Nagios working as designed, and it is exactly why max_check_attempts exists. This pattern is quite normal for many services, not just in your case.
In your case the check returns to OK within three minutes of the first non-OK result. For some services that much time out of sync is acceptable, but it might not be for your use case. I don't know enough about Postgres replication to say definitively whether it indicates an underlying problem.
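For illustration, the SOFT-to-HARD behaviour is governed by the service definition. Something along these lines (directive names from the standard Nagios object format; the template and command names here are hypothetical) would re-check a failing service every minute and only notify after four consecutive non-OK results:

define service {
    use                   generic-service                    ; hypothetical local template
    host_name             host
    service_description   PostgreSQL: Replication Delay
    check_command         check_postgres_hot_standby_delay   ; hypothetical command name
    check_interval        5     ; minutes between checks while the service is OK
    retry_interval        1     ; minutes between re-checks while in a SOFT non-OK state
    max_check_attempts    4     ; consecutive non-OK results before a HARD state and notification
}

Raising max_check_attempts (or the warning/critical byte thresholds themselves) is the usual way to keep brief spikes like yours from ever reaching a HARD state.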