I have an EC2 node running Ubuntu 14.04. On a deploy this morning, I received the following error message from git fetch:
error: unable to create temporary file: No space left on device
I logged into the server and df -h indicates I have plenty of space:
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            492M   12K  492M   1% /dev
tmpfs           100M  488K   99M   1% /run
/dev/xvda1      7.8G  4.9G  2.5G  67% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none            497M  4.0K  497M   1% /run/shm
none            100M     0  100M   0% /run/user
Am I misreading df here? My understanding has been that /tmp on EC2 is resident on /dev/xvda1, but maybe I'm wrong?
Verify the system's inode usage:
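With -i, df reports inode counts instead of block usage; a Use% at or near 100% here, while df -h still shows free space, confirms inode exhaustion:

df -i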
If inode usage is approaching 100%, use a command like the one below to find out what is occupying the inodes.
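One way to do that (assuming GNU find) is to print the parent directory of every file, then count how often each directory appears; directories hoarding the most files float to the top:

sudo find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head -20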
There is, sadly enough, no way to increase the number of inodes on a filesystem once it has been created. The exception is a filesystem that can still be grown, for example one sitting on LVM: enlarging the volume and running resize2fs adds inodes in proportion to the added space.
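A rough sketch only, assuming an ext4 filesystem on a logical volume named /dev/vg0/root with unallocated space left in the volume group (which does not apply to the plain /dev/xvda1 setup in the question):

lvextend -L +2G /dev/vg0/root
resize2fs /dev/vg0/root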
Referred to: No space left on device while there is plenty of space available
It could be that some application is creating a huge number of small files and completely exhausting the inodes. Look for such a rogue application and delete the unwanted files.
The inode limit can't be increased dynamically; however, if you are using LVM you may consider increasing the size of the volume. Otherwise, take a backup and create a new filesystem with a higher inode limit, as sketched below.
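When rebuilding, mke2fs can allocate more inodes up front: -i sets the bytes-per-inode ratio (lower means more inodes) and -N sets an absolute inode count. A sketch assuming the new filesystem goes on a hypothetical fresh volume /dev/xvdf (this wipes the device, so restore the backup onto it afterwards):

mkfs.ext4 -i 8192 /dev/xvdf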
You can use the find command to search for directories holding bulky files and perform a clean-up based on your own standards:
find / -xdev -type f -size +1M -exec ls -ltrh {} \;
When you find the place or directory where the clean-up is needed, pass that directory path to find and remove files according to your retention policy. In the sketch below, files older than 120 days are removed from a directory called dummy.
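A minimal sketch of that clean-up, assuming the directory is /dummy and the 120 refers to days; run the first form to preview what would be deleted before switching to removal:

find /dummy -type f -mtime +120 -print
find /dummy -type f -mtime +120 -exec rm -f {} \;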