I have a RHEL 5 server that recently ran out of disk space, and Logwatch now reports the following disk usage for it (I think this is from the last accurate night before the /var partition filled up):
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       62G  3.8G   55G   7% /
/dev/mapper/VolGroup01-LogVol00
                      198G  185G  2.8G  99% /var
/dev/cciss/c0d0p1      99M   24M   70M  26% /boot
If I log into the server and run df -h manually, I get the following result:
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00
                       62G   14G   46G  23% /
/dev/mapper/VolGroup01-LogVol00
                      198G  174G   14G  93% /var
/dev/cciss/c0d0p1      99M   24M   70M  26% /boot
I checked /usr/share/logwatch/default.conf/logwatch.conf and found that the temp directory is /var/cache/logwatch, but that directory is empty. Does anyone know what would cause Logwatch to display stale data like this?
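For reference, these are the kinds of checks involved; the paths assume the stock RHEL 5 Logwatch packaging, so adjust if yours differs:

# Confirm the configured temp directory (the TmpDir setting)
grep -i TmpDir /usr/share/logwatch/default.conf/logwatch.conf
# See when the nightly report actually runs (Logwatch is fired from cron.daily)
cat /etc/cron.daily/0logwatch
grep cron.daily /etc/crontab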
Data is obviously skewed. Run logwatch manually, or run your "comparison" at the exact same time the system runs its own.
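For example, you should be able to trigger just the disk-space report yourself and compare it against df -h taken at the same moment. This is a rough sketch; zz-disk_space is the service name stock Logwatch uses for the df report, and older versions use --print where newer ones use --output stdout:

# Run only the disk space service for today and print it to the terminal
logwatch --service zz-disk_space --range today --print
df -h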
@Tim asked the question that brought me down this path, so I'm giving him credit for the correct answer.
The problem wasn't that the data was skewed, but that a couple of processes were causing the used disk space to fluctuate wildly. This server runs six instances of Moodle with staggered backups that run throughout the night. Some of the backups were failing to complete and weren't cleaning up their temporary files. It appears that another process comes along later and removes those temporary files, and that happened somewhere between when Logwatch ran (4 AM) and when I checked the disk manually (8 AM).
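In case anyone else hits something similar, a rough way to confirm this kind of overnight fluctuation is to snapshot df on a schedule for a night or two, e.g. with a temporary cron entry like the one below (the file name and log path are just examples):

# /etc/cron.d/var-usage -- record /var usage every 30 minutes
*/30 * * * * root (date; df -h /var) >> /root/var-usage.log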