Recently the /var and /home folders have been filling up on my server, and I received a 'critical error' email saying that the /home folder was 100% full. I managed to fix it using lvextend, but I was wondering how I can find out what was clogging the directories up. Are there logs of some sort?
I have a dedicated server w/ root access so I have full control. What can I do to prevent this from happening again as it's already happened twice?
Thanks,
James
/home is usually user files. /var has mysql, mail, log files etc
Do you have space limits for accounts in cpanel?
Run these commands to see what is using the most space:
du -h -s /home/*
du -h -s /var/*
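If those listings get long, a sorted variant puts the biggest consumers at the bottom (this assumes GNU coreutils' sort -h, which understands human-readable sizes):

```shell
# Summarize top-level directories under /home and /var and sort by
# size, so the largest consumers appear last (sort -h reads K/M/G).
du -s -h /home/* /var/* 2>/dev/null | sort -h
```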
Disks get full because files get written to them. To find what's using the space in a partition, du (for "disk usage") is the tool. I like to run it with the -x flag, because that limits it to the partition of interest and doesn't give you excessive verbosity. If you just want a sorted list of space hogs, you can add | sort -h to the end of the command.

Once you've got that info, you can drill down into excessively-large-looking directories to see where the usage is happening (/home/foo, /home/foo/suspicious, etc. etc.). What counts as "excessively large" is a judgement you, as the admin, need to make based on the expected usage of your server.

Finding who/what is responsible for creating the files you consider excessive can be a little tricky. Using ls -l will show the ownership of the file(s), which is the first piece of info. If it's a regular user, then the problem could be cron jobs, a webapp the user is running, or a manually-invoked local command. That's something you'll have to discover for yourself, as it's your box. If the files are owned by root or another system user, then it's a system process and you get to hunt the culprit down (you should know what's running on your system and what it does, so it should be fairly easy).

As far as preventing it from happening again: if it's a user causing the problems, your options are enforcing disk quotas, or telling the user to clean up after themselves.
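As a concrete sketch of that drill-down loop (the --max-depth option assumes GNU du; the directory names are just examples):

```shell
# Stay on one filesystem (-x) and show one level of directories,
# sorted so the biggest entries appear last.
du -x -h --max-depth=1 /home 2>/dev/null | sort -h

# Then repeat on whatever looks too big, and check ownership:
#   du -x -h --max-depth=1 /home/foo | sort -h
#   ls -lh /home/foo
```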
If it's a system process, you'll need to track down what it's writing and either reconfigure it (log rotation, for example) or give it more room.
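If the offender turns out to be a daemon writing logs into /var, log rotation is the usual fix; a minimal sketch of a logrotate rule, dropped into /etc/logrotate.d/ (the path /var/log/myapp/*.log and the limits here are assumptions, not from the thread):

```
/var/log/myapp/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
}
```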
Based on your comment on Hawk's answer, the problem may be a capacity planning issue. If you're giving users enough space that they're storing 110GB of data and only using 4% of their quota, you need bigger disks (much, much bigger disks). If you're relying on overcommit to make money, sooner or later you're going to get bitten in the arse.
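To see how close users actually are to a limit, a quick back-of-the-envelope check can help (QUOTA_GB and the /home layout are assumptions; real enforcement would use the quota tools, and -BG assumes GNU du):

```shell
# Rough per-user capacity check: usage in GB vs. an assumed quota.
QUOTA_GB=120
du -s -BG /home/* 2>/dev/null | awk -v q="$QUOTA_GB" \
    '{gsub(/G/, "", $1); printf "%s: %sG of %sG (%d%%)\n", $2, $1, q, 100 * $1 / q}'
```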
There are no logs tracking creation / changes to every file on the system.
du (and sort) will give you stats on the size of directories, but it'll be a huge list; have a look at 'find' to locate files which have been changed recently.
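For example (the size and age thresholds are arbitrary; -xdev keeps find on one filesystem, much like du's -x):

```shell
# List files over 100MB modified within the last 2 days under /var,
# without crossing into other mounted filesystems.
find /var -xdev -type f -size +100M -mtime -2 -exec ls -lh {} \; 2>/dev/null
```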