I'm running a Linux instance on EC2 (I have MongoDB and node.js installed) and I'm getting this error:
Cannot write: No space left on device
I think I've tracked it down to this file; here is the df output:

Filesystem     1K-blocks    Used  Available  Use%  Mounted on
/dev/xvda1       1032088 1032088          0  100%  /
The problem is, I don't know what this file is and I also don't know if this file is even the problem.
So my question is: How do I fix the "No space left on device" error?
That file, /, is your root directory. If it's the only filesystem you see in df, then it's everything. You have a 1 GB filesystem and it's 100% full. You can start to figure out how it's used like this:
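One common form of that command (take the exact flags as a sketch; the point is du piped through sort so the largest paths land at the end):

sudo du -x -h / | sort -h | tail -n 40   # -x stays on this filesystem; sort -h understands the human-readable sizes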
You can then replace / with the paths that are taking up the most space. (They'll be at the end, thanks to the sort. The command may take a while.)

I know I am replying to this thread after nearly 5 years, but it might help someone. I had the same problem on an m4.xlarge instance: df -h told me that /dev/xvda1 was full (100%).
I tried to solve it; here are the steps. What helped me was finding out that it was the Docker containers taking all my space, so I pushed all my containers to my Docker registry and then ran sudo rm -rf /var/lib/docker/, which cleared up my space. :) Hope it helps someone. :)
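A rough sketch of that cleanup (the registry and image names are placeholders, and rm -rf /var/lib/docker/ deletes all local images, containers, and volumes, so only run it once everything you need has been pushed):

docker images                                      # list local images so you know what to keep
docker push registry.example.com/my-app:latest     # placeholder registry/image name
sudo systemctl stop docker                         # stop the daemon before deleting its data directory
sudo rm -rf /var/lib/docker/                       # removes ALL local Docker state
sudo systemctl start docker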
If you are running an EBS boot instance (recommended), then you can increase the size of the root (/) volume using the procedure I describe in this article:

If you are running an instance-store instance (not recommended), then you cannot change the size of the root disk. You have to either delete files, move files to ephemeral storage (e.g., /mnt), or attach EBS volumes and move files there.
Here's an article I wrote that describes how to move a MySQL database from the root disk to an EBS volume:
...and consider moving to EBS boot instances. There are many reasons why you'll thank yourself later.
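For a rough idea of what growing an EBS root volume looks like (a sketch only, not the procedure from the article; the volume ID, target size, device name, and ext4 filesystem are assumptions):

aws ec2 modify-volume --volume-id vol-0123456789abcdef0 --size 20   # enlarge the volume (placeholder ID)
sudo growpart /dev/xvda 1                                           # grow partition 1 to fill the volume (growpart ships with cloud-utils)
sudo resize2fs /dev/xvda1                                           # grow an ext4 root filesystem to fill the partition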
I have just solved that problem by running this command:
sudo apt autoremove
and a lot of old packages were removed, freeing up 5 gigabytes; for instance, there were many packages like "linux-aws-headers-4.4.0-1028".
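If you want to see which installed packages are the big ones before removing anything (a sketch, Debian/Ubuntu-based systems only):

dpkg-query -Wf '${Installed-Size}\t${Package}\n' | sort -n | tail -20   # installed size in KiB, largest last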
I've recently run into this issue on Amazon Linux. My crontab outbound email queue, /var/spool/clientmqueue, was 4.5 GB. I solved it by:
sudo find / -type f -size +10M -exec ls -lh {} \;
/bin/rm -f <path-to-large-file>
Problem solved!
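If the queue directory itself is what's full, you can also empty it directly (a sketch; find -delete avoids the "argument list too long" error a shell glob can hit in a very large directory):

sudo du -sh /var/spool/clientmqueue                 # confirm the queue is what's eating the space
sudo find /var/spool/clientmqueue -type f -delete   # remove the queued messages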
Paulo was on the right track for me, but when I tried to run sudo apt autoremove, it responded:
First, I had to run
That cleared just enough space for me to run 'sudo apt autoremove', and that took me from 100% full on /dev/xvda1 to 28%.
It could be coming from Jenkins or Docker. To solve that, you should clean the Jenkins logs and cap their size.
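A minimal way to check and stop the bleeding, assuming the default log and data locations of a packaged install:

sudo du -sh /var/log/jenkins /var/lib/docker        # see whether Jenkins logs or Docker data is the culprit
sudo truncate -s 0 /var/log/jenkins/jenkins.log     # empty a runaway Jenkins log without deleting the file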
Hope this helps those who are using the CodeDeploy agent and having a similar issue.

I was using an Amazon Linux EC2 instance and my root directory was 100% full. First, so I could run the command below at all, I deleted all the files in /var/log/journal/.

Then I ran this command:
sudo du -xhc /
and found that, out of 8 GB, the codedeploy-agent/deployment-root folder was using 5.1 GB. By default, the CodeDeploy agent stores the last 5 revision archives, so I changed :max_revisions from 5 to 2 in /etc/codedeploy-agent/conf/codedeployagent.yml.
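The relevant part of that file ends up looking roughly like this, and the agent needs a restart to pick up the change (the service name assumes a standard Amazon Linux install):

# /etc/codedeploy-agent/conf/codedeployagent.yml (excerpt)
:max_revisions: 2

sudo service codedeploy-agent restart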
Use
du -hs * | sort -rh | head -5
to check the top 5 space consumers, then
rm -rf <name>
to remove junk such as large log files or old archives inside a logs folder.