I am getting the following error on the odd occasion on our web server:
Warning: session_start(): open(/tmp/sess_7ifl201pvdd91rr6tr4015n5k4, O_RDWR) failed: No space left on device (28) in /home/some/script/on/my/site/script.php on line ##
My server admin tells me that it must be something on the site that is causing this, such as someone uploading a large file via PHP. But I'm a bit sceptical, as it has only started happening in the past couple of days.

So I was wondering if anyone else knew of reasons why this might occur? I am running LiteSpeed (not Apache) on CentOS. Let me know if any more information is required.
If the problem is large uploads from clients, you can mitigate this a little by setting the `LimitRequestBody` directive (see http://httpd.apache.org/docs/2.2/mod/core.html#LimitRequestBody - LiteSpeed claims to be drop-in compatible with Apache, so it should work there too; if not, you'll need to check their docs). This only limits single requests, though - a large number of simultaneous uploads could still fill temporary storage.
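As a sketch of what that would look like in an Apache-style config (the 10 MB limit and the directory path are just example values):

```apache
# Cap request bodies (including uploads) at 10 MB for this site.
<Directory "/home/some/script/on/my/site">
    LimitRequestBody 10485760
</Directory>
```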
How big is the volume that holds `/tmp`, and is it dedicated to `/tmp` or shared with other areas? It would be helpful to add the output of `df -h` to your question. If you don't have a separate filesystem for `/tmp` then something elsewhere could be filling up the volume.

I would suggest adding some monitoring to the server. I use collectd to keep an eye on things and a slightly altered version of this script to produce pretty pictures from the recorded data - there are a couple of other popular options that would do the same job too. You can ask it to monitor filesystem space used+free (using this module), then you'll see if your errors correspond to a period when `/tmp` filled.

A filesystem can be "full" while there is plenty of space left if it has run out of inodes. If this is the case, you could recreate the filesystem with a larger number of inodes. This is usually only a problem when you have a great many small files, as the defaults when creating a filesystem are normally fine (an example of where the number of inodes needs tweaking is a mail server where each mail is stored as a separate file).
As a quick and dirty monitoring solution you could have a cron job that fires every minute and runs:
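The command itself appears to be missing from the answer as captured; a minimal sketch that records what the next paragraph describes (free space and free inodes on `/tmp`, once per minute, keeping roughly the last 24 hours) might look like this - the log path and midnight-truncation scheme are assumptions:

```shell
#!/bin/sh
# Snapshot /tmp space and inode usage; run from cron once a minute, e.g.:
#   * * * * * /usr/local/bin/tmp-monitor.sh
LOG=/home/tmpmonitor/tmp-usage.log

# Start a fresh file each midnight so the log only covers the last ~24 hours.
[ "$(date +%H%M)" = "0000" ] && : > "$LOG"

{
  date
  df -kP /tmp    # blocks used/free on the filesystem holding /tmp
  df -iP /tmp    # inodes used/free on the same filesystem
} >> "$LOG"
```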
That will leave you with a file listing the space and inodes free on the `/tmp` filesystem at each minute of the day for the last 24 hours, which you can use to see whether space or inodes are the issue the next time you get the error. The file will be about 425 Kbytes long. This is far from efficient, so it should not be used as a permanent solution, and it will miss issues caused by `/tmp` simply being far too small, such that it can completely fill within one minute (the resolution of the check). You could make the script fancier and have it run

    ( date; du -shc /tmp/* ) >> /home/tmpusewhennearfull

if the space or inodes available on `/tmp` fall below a certain point (50%, for instance) - then you have a chance of seeing what was consuming the space, even if something is only filling it temporarily.

Note: I'm assuming this is a server or VM dedicated to your use; if you are on a shared server then you have much more limited options regarding installing monitoring software and so forth.
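A sketch of that fancier version, assuming the same output file as above and reading `df`'s use-percentage column (the 50% threshold is the example figure, and checking "used above 50%" is equivalent to "available below 50%"):

```shell
#!/bin/sh
# Dump per-entry usage of /tmp when used space or used inodes exceed the
# threshold (i.e. less than 50% available). Threshold and path are examples.
THRESHOLD=50
SPACE_USED=$(df -P /tmp | awk 'NR==2 { gsub("%", ""); print $5 }')
INODE_USED=$(df -iP /tmp | awk 'NR==2 { gsub("%", ""); print $5 }')
if [ "$SPACE_USED" -ge "$THRESHOLD" ] || [ "$INODE_USED" -ge "$THRESHOLD" ]; then
  { date; du -shc /tmp/*; } >> /home/tmpusewhennearfull
fi
```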
Running out of inodes may also be the problem. Look at `df -i` and see if the partition where `/tmp` resides is close to 100%.