This is a fairly new Graphite installation on CentOS 6.5. It's receiving metrics via AMQP (RabbitMQ) from Sensu.
As a proof-of-concept, I have a load-average monitor reporting data to sensu/graphite every 60 seconds. I can see the data arriving in Graphite's listener log.
If I select some data for graphing, I can only see data if my period is within "the past 30 minutes." (Then it's only visible if I set the line mode to "connected line" because the points disappear.)
If I set the period to "View past 31 minutes" all of the data disappears from the graph.
I've tried playing with the storage-schemas.conf but haven't made any appreciable change in this behavior.
If I go over 30 minutes, is the data somehow being thrown away, filtered out? What would I check?
This is storage-schemas.conf:
[load_averages]
pattern = \.load_avg\.
retentions = 10s:14d,1m:90d
I'm pretty sure I understand what's going on here now.
As I suspected, it comes down to a mismatch between the sampling rate of the metric and the sampling rate the Whisper database expects.
The key is the storage-schemas.conf file, which specifies the resolution(s) at which data points are stored.
I configured Graphite using the echocat/graphite Puppet module, which sets up a default retention of 1 second for the first 30 minutes, 1 minute out to 1 day, and 5 minutes out to 2 years.
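If that's the case, the catch-all schema the module writes out should be equivalent to something like the following (the section name here is just illustrative; the retentions are what matter):

[default]
pattern = .*
retentions = 1s:30m,1m:1d,5m:2y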
The load-average metric I was trying to graph had a sampling period of 1 minute (60 seconds), so the Whisper database was storing 59 nulls and one real value per minute in its 1-second archive. When requesting more than 30 minutes, Graphite had to read from the 1-minute archive instead, and the rollup from that mostly-null 1-second data came out empty, so the real data effectively disappeared.
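A note on why it vanishes rather than just being thinned out: when Graphite rolls the 1-second archive up into the 1-minute archive, it only keeps an aggregated point if at least xFilesFactor of the underlying slots are non-null. One real value out of 60 slots is about 1.7%, far below the usual default of 0.5, so the rolled-up points are null. That setting lives in storage-aggregation.conf; the stock default entry looks roughly like this:

[default_average]
pattern = .*
xFilesFactor = 0.5
aggregationMethod = average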
Two things have to happen:
First, change the initial sampling rate in storage-schemas.conf (using puppet) so the storage bins match the sampling frequency.
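For a metric that only arrives once a minute, the first retention bucket should be 60 seconds wide. The resulting storage-schemas.conf section would look something like this (the retention lengths are just an example; set them to however long you actually want to keep data):

[load_averages]
pattern = \.load_avg\.
retentions = 60s:90d,5m:2y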
Second, the existing Whisper database files have to be either thrown away or resized, because each .wsp file keeps the retention settings it was created with; changing storage-schemas.conf only affects newly created files.
Whisper has a utility (whisper-resize.py), but in my case I had no valuable data to keep. I wiped the affected Whisper DB files and let them be recreated.
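For reference, resizing instead would look something like the following (the metric path is hypothetical; whisper-info.py shows the retentions an existing file was actually created with):

whisper-info.py /opt/graphite/storage/whisper/servers/myhost/load_avg.wsp
whisper-resize.py /opt/graphite/storage/whisper/servers/myhost/load_avg.wsp 60s:90d 5m:2y

Wiping the files so they get recreated under the new schema is just a matter of deleting the matching .wsp files, e.g.:

find /opt/graphite/storage/whisper -name '*.wsp' -path '*load_avg*' -delete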