I'm running a database loading process (osm2pgsql) which is failing:
Processing: Node(17404k 148.8k/s) Way(1351k 6.38k/s) Relation(9520 29.94/s)
way_done failed: ERROR: could not extend file "base/140667/152463": No space left on device
HINT: Check free disk space.
(7)
Arguments were: 187226311,
At the start of the import, mem reports:
total used free shared buffers cached
Mem: 31G 29G 2.4G 0B 178M 24G
-/+ buffers/cache: 4.5G 26G
Swap: 0B 0B 0B
Shortly before the end:
total used free shared buffers cached
Mem: 31G 31G 227M 0B 178M 26G
-/+ buffers/cache: 4.8G 26G
Swap: 0B 0B 0B
Meanwhile, df at the start:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vda1 10309828 7879412 1997036 80% /
udev 16470572 12 16470560 1% /dev
tmpfs 6590080 260 6589820 1% /run
none 5120 0 5120 0% /run/lock
none 16475196 0 16475196 0% /run/shm
none 102400 0 102400 0% /run/user
/dev/vdb 247709760 105978300 129148548 46% /mnt
And from about 3/4 of the way through the process, by which point Use% for /dev/vda1 sits at 100%:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vda1 10309828 9854348 22100 100% /
udev 16470572 12 16470560 1% /dev
tmpfs 6590080 260 6589820 1% /run
none 5120 0 5120 0% /run/lock
none 16475196 0 16475196 0% /run/shm
none 102400 0 102400 0% /run/user
/dev/vdb 247709760 105978300 129148548 46% /mnt
I'm unable to identify any actual files filling up /dev/vda1. I compared du output from before and after the import:
du -h -d 3 / 2>/dev/null | grep -v ^0 > /tmp/o2p1.txt
[start import]
du -h -d 3 / 2>/dev/null | grep -v ^0 > /tmp/o2p2.txt
diff /tmp/o2p1.txt /tmp/o2p2.txt
That reveals nothing.
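For completeness, two checks that widen the net beyond a plain du diff (a sketch; the sudo calls assume root access, and lsof may need installing):

```shell
# Re-run du with root privileges so directories your own user cannot
# read (system service data directories, other users' homes) are
# counted too; -x stays on the / filesystem, sort -h orders by size.
sudo du -xh -d 3 / 2>/dev/null | sort -h | tail -n 20

# Space can also be held by deleted-but-still-open files, which du
# never sees at all; lsof can list them (+L1 = link count below 1).
sudo lsof +L1
```

If the root-run du suddenly shows large directories that the non-root run missed, a permission-restricted directory is the likely culprit.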
What's going on?
Ok, it was simple. The Postgres database was on /dev/vda1 and was getting huge. It didn't show up in the du output because I wasn't running du as root, so it silently skipped the postgres-owned data directory. I guess that once the disk fills up, Postgres stops saving to disk and keeps everything in memory - until memory runs out as well.
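The du blind spot is easy to reproduce without Postgres: as a non-root user, du silently omits directories it cannot read, and the 2>/dev/null in the commands above discards the "Permission denied" warnings that would have given it away. A minimal sketch (the temp directory stands in for a postgres-owned data directory; run as a non-root user, since root can read mode-000 directories anyway):

```shell
#!/bin/sh
# Demonstrate that du silently skips unreadable directories.
demo=$(mktemp -d)
mkdir "$demo/secret"
dd if=/dev/urandom of="$demo/secret/big.bin" bs=1M count=10 2>/dev/null
chmod 000 "$demo/secret"    # simulate a directory owned by another user

du -sk "$demo" 2>/dev/null  # as non-root: a few KB, the 10 MB is invisible

chmod 700 "$demo/secret"    # restore access (equivalent to running du as root)
du -sk "$demo" 2>/dev/null  # now the 10 MB shows up

rm -rf "$demo"
```

Dropping the 2>/dev/null (or running du as root) would have surfaced the Postgres data directory immediately.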