On a virtualized server running Ubuntu 10.04, df reports the following:
# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             7.4G  7.0G     0 100% /
none                  498M  160K  498M   1% /dev
none                  500M     0  500M   0% /dev/shm
none                  500M   92K  500M   1% /var/run
none                  500M     0  500M   0% /var/lock
none                  500M     0  500M   0% /lib/init/rw
/dev/sda3             917G  305G  566G  36% /home
This is puzzling me for two reasons: 1) df says that /dev/sda1, mounted at /, has a 7.4 gigabyte capacity of which only 7.0 gigabytes are in use, yet it reports / as 100 percent full; and 2) I can create files on /, so it clearly does have space left.
Possibly relevant is that the directory /www is a symbolic link to /home/www, which is on a different partition (/dev/sda3, mounted at /home).
Can anyone offer suggestions on what might be going on here? The server appears to be working without issue, but I want to make sure there's not a problem with the partition table, file systems or something else which might result in implosion (or explosion) later.
It's possible that a process has opened a large file which has since been deleted; you'll have to kill that process to free up the space. You may be able to identify the process using lsof: on Linux, deleted-but-still-open files are known to lsof and marked as (deleted) in its output.
You can check this with
sudo lsof +L1
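For illustration, a hedged sketch; the apache2 service name is just an example:

sudo lsof +L1 | grep '(deleted)'
sudo service apache2 restart

If the first command shows, say, apache2 still holding a deleted multi-gigabyte log, restarting (or stopping) that service closes the file and frees the space.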
By default, 5% of the filesystem is reserved for cases where the filesystem fills up, to prevent serious problems. Your filesystem is full, but nothing catastrophic is happening thanks to that 5% buffer: root is permitted to use the safety margin, and in your setup non-root users have no reason to write into that filesystem.
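A quick sanity check with the rounded figures from the df output above: 5% of 7.4G is roughly 0.37G, and 7.0G used plus that reserve accounts for essentially the entire 7.4G, which is why non-root users see 0 available and Use% reads 100% even though root can still write.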
If you have daemons that run as a non-root user but that need to manage files in that filesystem, things will break. One common such daemon is named; another is ntpd.

You may also be out of inodes. Check inode usage with this command:

df -i
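If df -i does show the inodes exhausted, one hedged way to find which top-level directory holds the most files (staying on the root filesystem):

find / -xdev 2>/dev/null | cut -d/ -f2 | sort | uniq -c | sort -n | tail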
Most Linux filesystems (ext3, ext4) reserve 5% of the space for use only by the root user.

You can see this with, e.g.:

tune2fs -l /dev/sda1 | grep -i 'reserved block count'

You can change the reserved amount using:

tune2fs -m 0 /dev/sda1

The 0 in this command stands for the percentage of the disk size to reserve, so you may want to leave at least 1%. In most cases the server will appear to continue working fine, assuming all processes are being run as root.
In addition to the already suggested causes, in some cases it can also be the following: files hidden underneath a mount point. Data written into a directory before another filesystem was mounted on top of it still takes up space on the underlying filesystem, but no longer shows up in du. Unmount the filesystem that covers it and run

du -md 1

again. Fix the situation by moving the hidden folder to some other place, or by mounting the filesystem in a different place.
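A hedged way to check for this without unmounting anything is to bind-mount the root filesystem somewhere else and look underneath its mount points (the /mnt/rootfs path is just an example):

sudo mkdir -p /mnt/rootfs
sudo mount --bind / /mnt/rootfs
sudo du -sh /mnt/rootfs/home
sudo umount /mnt/rootfs

The bind mount does not carry the /home mount with it, so /mnt/rootfs/home shows whatever sits on the root filesystem underneath the real /home mount point.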
I had this problem and was baffled by the fact that deleting various large files did not improve the situation (I didn't know about the 5% buffer). Anyway, following some clues here, I started from root and walked down into the largest directories revealed by repeatedly running du, until I came to a directory of web server log files containing some absolutely massive logs, which I truncated. Suddenly df -h was down to 48% used!
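A rough sketch of that walk and the truncation, with hypothetical paths and log file name:

cd /
du -smx * 2>/dev/null | sort -n
cd var && du -sm * | sort -n
truncate -s 0 /var/log/apache2/access.log

du -smx prints sizes in megabytes and, thanks to -x, stays on the current filesystem; repeat it one level down at a time until the culprit appears. Truncating (rather than deleting) the log matters if a daemon still has it open: truncation frees the blocks immediately, while a deleted-but-open file keeps using space until the process closes it.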
df -h is rounding the values; even the percentages are rounded. Omit the -h and you will see finer-grained differences.

Oh, and ext3 and its derivatives reserve a percentage of the filesystem (5% by default) for exactly this problematic situation. If your root filesystem were really full (0 bytes remaining) you couldn't boot the system, so the reserved portion prevents this.
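For example, the unrounded view (sizes in 1K blocks, or exact bytes with -B1):

df /
df -B1 /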
If you are running out of space on /dev/shm and are wondering why, given that the actual used space (du -shc /dev/shm) is much smaller than /dev/shm's allotted size, lsof can help. In my case the first file was consuming ~7.9GB, the second about 12.7GB, and so on; the regex picks up anything 1GB and over, and you can tune it as needed. The cause could be that an otherwise dead process is holding on to a file.
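Something along these lines will list the offenders; the grep for ten or more digits in lsof's SIZE/OFF column (roughly 1GB and up) is just one way to write the size filter, so tune it as needed:

sudo lsof +L1 | grep 'dev/shm' | grep -E ' [0-9]{10,} '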
df -h will not show the issue: it reports only 508K in use on /dev/shm, yet you can see the 90G <> 46G offset; it's sitting in the files listed above.

Then just kill the PID (kill -9 PID) listed in the second column of the output above.

Result: great, space cleared.
The reason for doing things this way, and not just something like

sudo lsof +L1 | grep '(deleted)' | grep 'dev/shm' | awk '{print $2}' | sudo xargs kill -9

is that the underlying process(es) may still be working. If you're confident they are not, that command is a potential alternative depending on your scenario: it will kill all processes which have 'deleted' files open there.

I did a big update of several libraries, and there were a lot of unnecessary libraries and temporary files left over, so I freed up space in the "/" folder using:
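On Ubuntu the usual package manager cleanup is something along these lines (the exact commands will vary with your setup):

sudo apt-get autoremove
sudo apt-get clean

autoremove drops packages that were installed as dependencies and are no longer needed; clean deletes the cached .deb files under /var/cache/apt/archives.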
And empty your trash
Check /lost+found: I had a system (CentOS 7) where some files in /lost+found ate up all the space.
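A quick way to see whether /lost+found is the culprit (assuming it lives on the full filesystem):

sudo du -sh /lost+found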