I have a SCSI disk in a server (hardware RAID 1), 32G, ext3 filesystem. df tells me that the disk is 100% full. If I delete 1G, this is correctly shown. However, if I run du -h -x /, then du tells me that only 12G are used (I use -x because of some Samba mounts).
So my question is not about subtle differences between the du and df commands, but about how I can find out what causes this huge difference.
I rebooted the machine for a fsck that went without errors. Should I run badblocks? lsof shows me no open deleted files, lost+found is empty, and there is no obvious warn/err/fail statement in the messages file.
Feel free to ask for further details of the setup.
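For reference, a compact recap of the checks described above, assuming GNU coreutils and lsof are installed:

df -h /       # block usage as the filesystem itself reports it
du -xsh /     # usage as seen by walking the tree, staying on one filesystem (-x)
df -i /       # inode usage; a filesystem can be "full" on inodes alone
lsof +L1      # open files whose directory entry has been deleted (link count 0)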
Just stumbled on this page when trying to track down an issue on a local server. In my case, the output of df -h and du -sh mismatched by about 50% of the hard disk size. This was caused by apache (httpd) holding open large log files which had already been deleted from disk.
This was tracked down by running lsof | grep "/var" | grep deleted, where /var was the partition I needed to clean up. The output showed lines like this:
httpd 32617 nobody 106w REG 9,4 1835222944 688166 /var/log/apache/awstats_log (deleted)
The situation was then resolved by restarting apache (service httpd restart), which cleared up 2GB of disk space by allowing the locks on the deleted files to be released.
Check for files located under mount points. Frequently, if you mount a directory (say, a sambafs) onto a filesystem that already had files or directories under it, you lose the ability to see those files, but they're still consuming space on the underlying disk. I've had file copies in single-user mode dump files into directories that I couldn't see except in single-user mode (due to other filesystems being mounted on top of them).
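A quick way to check for that, using a hypothetical Samba mount at /mnt/samba that can be taken offline for a moment:

umount /mnt/samba    # take the mount offline (hypothetical mount point)
ls -la /mnt/samba    # anything listed now was hidden underneath the mount
mount /mnt/samba     # remount; assumes a matching fstab entry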
I agree with OldTroll's answer as the most probable cause for your "missing" space.
On Linux you can easily remount the whole root partition (or any other partition, for that matter) to another place in your filesystem, say /mnt: just bind-mount it there, then run du against the new mount point and see what is using up your space.
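A minimal sketch of that approach, assuming /mnt is currently unused:

mount --bind / /mnt    # expose the root filesystem under /mnt; submounts are not carried over
du -sh /mnt/*          # now du can see files that live underneath other mount points
umount /mnt            # detach the bind mount when done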
In my case this had to do with large deleted files. It was fairly painful to solve before I found this page, which set me on the correct path. I finally solved the problem by using lsof | grep deleted, which showed me which program was holding two very large log files (totalling 5GB of my available 8GB root partition).
See what df -i says. It could be that you are out of inodes, which might happen if there is a large number of small files in that filesystem, using up all the available inodes without consuming all the available space.
Files that are open by a program do not actually go away (stop consuming disk space) when you delete them; they go away when the program closes them. A program might have a huge temporary file that you (and du) can't see. If it's a zombie program, you might need to reboot to clear those files.
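A reboot (or service restart) can often be avoided on Linux, because the space held by a deleted-but-open file can be reclaimed by truncating it through /proc. A sketch, reusing the PID (32617) and file descriptor (106) from the apache example above:

lsof +L1                  # list open files whose link count is 0, i.e. deleted
: > /proc/32617/fd/106    # truncate the deleted file in place; its space is freed immediately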
For me, I needed to run sudo du, as there were a large number of Docker files under /var/lib/docker that a non-sudo user doesn't have permission to read.
Try this to see if a dead/hung process is locked while still writing to the disk:
lsof | grep "/mnt"
Then try killing off any PIDs which are stuck (especially look for lines ending in "(deleted)").
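For example (the PID 12345 below is hypothetical; take the real one from the second column of the lsof output):

lsof | grep "/mnt" | grep deleted    # narrow the list down to deleted-but-open files
kill 12345                           # ask the stuck process to exit cleanly
kill -9 12345                        # last resort, if it ignores the TERM signal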
This is the easiest method I have found to date to find large files! Here is an example for when your root mount (/) is full:
cd /    (so you are in root)
ls | xargs du -hs
Example output:
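The sizes below are made up for illustration; the point is that one directory stands out:

8.0M    bin
56M     boot
31M     etc
12G     store
1.1G    usr
355M    var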
Then you would notice that store is large, so cd /store and run
ls | xargs du -hs
again. In this case the vms directory is the space hog.
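Note that ls | xargs splits on whitespace, so filenames containing spaces will trip it up, and dotfiles are skipped. A variant that avoids both problems, stays on one filesystem, and sorts by size (assuming GNU du and sort):

du -xh --max-depth=1 / | sort -h    # the biggest directories end up at the bottom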
One more possibility to consider: you are almost guaranteed to see a big discrepancy if you are using Docker and you run df/du inside a container that is using volume mounts. In the case of a directory mounted to a volume on the Docker host, df will report the HOST's totals. This is obvious if you think about it, but when you get a report of a "runaway container filling the disk!", make sure you verify the container's filespace consumption with something like du -hs <dir>.
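To illustrate, with a hypothetical container named app whose /data is a volume mounted from /srv/appdata on the host:

docker exec app df -h /data     # reports the host filesystem's totals, not the volume's usage
docker exec app du -sh /data    # actual space consumed by the volume's contents
du -sh /srv/appdata             # the same check, run from the host side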