Recently we had a problem where one of our ext4 filesystems seemed unable to handle a very large number of files, more than 6 million in this case, despite having enough free space. Is 6 million the maximum number of files an ext4 filesystem can hold when formatted with all the default settings? I tried to Google it but didn't get any definitive answer. Can anyone out here shed some light on this, please? Cheers!!
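For what it's worth, the usual culprit at this scale is inode exhaustion rather than disk space: mkfs.ext4 allocates a fixed number of inodes at format time from the bytes-per-inode ratio (inode_ratio, 16384 by default in /etc/mke2fs.conf), and that count cannot be raised afterwards without growing or reformatting the filesystem. A sketch of how to check (the mount point and device path below are placeholders):

```shell
# Show total/used/free inodes for the filesystem in question
df -i /mountpoint

# Show the inode count baked in at mkfs time (device path is an example)
tune2fs -l /dev/sda1 | grep -i 'inode count'

# With the default 16384 bytes-per-inode ratio, a 100 GiB filesystem
# gets roughly this many inodes:
echo $((100 * 1024 * 1024 * 1024 / 16384))   # → 6553600
```

Note that ~6.5 million inodes on a ~100 GB filesystem would line up with the ceiling described above.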
I tried to remount a formerly read-only mounted filesystem read-write:
mount -o remount,rw /mountpoint
Unfortunately it did not work:
mount: /mountpoint not mounted already, or bad option
dmesg
reports:
[2570543.520449] EXT4-fs (dm-0): Couldn't remount RDWR because of unprocessed orphan inode list. Please umount/remount instead
A umount
does not work either:
umount /mountpoint
umount: /mountpoint: device is busy.
(In some cases useful info about processes that use
the device is found by lsof(8) or fuser(1))
Unfortunately, neither lsof
nor fuser
shows any process accessing anything located under the mount point.
So - how can I clean up this unprocessed orphan inode list to be able to remount the filesystem read-write without rebooting the computer?
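One hedged approach, assuming nothing really holds the mount open: detach the mount point lazily, then let e2fsck process the orphan list before mounting again. The /dev/dm-0 path below is an assumption inferred from the dm-0 name in the dmesg line; only run fsck once the device is actually detached.

```shell
# Detach the mount point immediately; the kernel finishes the
# unmount in the background once the last reference is gone
umount -l /mountpoint

# Process the orphan inode list on the (now unmounted) device
# (device path is an example inferred from the dmesg output)
e2fsck -f /dev/dm-0

# Mount read-write again
mount /dev/dm-0 /mountpoint
```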
Sometimes people delete files they shouldn't, a long-running process still has the file open, and recovering the data by catting /proc/<pid>/fd/N
just isn't awesome enough. Awesome enough would be if you could "undo" the delete by running some magic option to ln that would let you re-link to the inode number (recovered through lsof).
I can't find any Linux tools to do this, at least with cursory Googling.
What do you got, serverfault?
EDIT1: The reason catting the file from /proc/<pid>/fd/N
isn't awesome enough is because the process which still has the file open is still writing to it. A delete removes the reference to the inode from the filesystem namespace. What I want is a way of re-creating the reference.
EDIT2: 'debugfs ln' works but the risk is too high since it frobs raw filesystem data. The recovered file is also crazy inconsistent. The link count is zero and I can't add links to it. I'm worse off this way since I can just use /proc/<pid>/fd/N
to access the data without corrupting my fs.
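As far as I know there is no supported way to re-link another process's deleted inode from userspace, but a safer variant of the /proc trick addresses the "still writing to it" problem: pause the writer so the snapshot is internally consistent, copy the file descriptor's contents, then resume. $PID and $FD below are placeholders you would take from the lsof output.

```shell
# Freeze the writer so the copy is a consistent snapshot
kill -STOP "$PID"

# Snapshot the deleted-but-open file via its /proc fd entry
cp /proc/"$PID"/fd/"$FD" /recovered/file

# Let the writer continue
kill -CONT "$PID"
```

This doesn't restore the original directory entry, but it gives you a clean copy without frobbing raw filesystem data the way debugfs does.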
I have a filesystem that has lots of small files. Currently about 80% of the inodes are used (I checked with df -i
), but only 60% of the disk space. How can I 'increase' the number of inodes? If it were just disk space, I know I could simply increase the size of the disk (this disk is on LVM). If I increase the size of the disk, will that give me more inodes?
I'm willing to grow the filesystem this disk is on, if that'd help.
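For reference, ext4 cannot change its inode count in place, but growing the filesystem does help: resize2fs adds new block groups, and each block group carries its own slice of inodes at the original bytes-per-inode ratio. A sketch, with example LVM names:

```shell
# Grow the logical volume, then the filesystem on it
# (VG/LV names are examples)
lvextend -L +20G /dev/vg0/data
resize2fs /dev/vg0/data

# Verify the inode total grew along with the space
df -i /mountpoint

# At the default 16384 bytes-per-inode ratio, +20 GiB should add
# roughly this many inodes:
echo $((20 * 1024 * 1024 * 1024 / 16384))   # → 1310720
```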
I recently installed Munin on a development web server to keep track of system usage. I've noticed that the system's inode usage is climbing by about 7-8% per day even though the disk usage has barely increased at all. I'm guessing something is writing a ton of tiny files but I can't find what / where.
I know how to find disk space usage but I can't seem to find a way to summarize inode usage.
Is there a good way to determine inode usage by directory so I can locate the source of the usage?
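One common approach (a sketch, not specific to Munin): since every file, directory, and symlink consumes one inode, counting directory entries per top-level directory approximates per-directory inode usage. The -xdev flag keeps find from crossing into other filesystems.

```shell
# Count entries under each top-level directory on this filesystem,
# heaviest first; the counts approximate inode usage per subtree
for d in /*/; do
    printf '%s\t%s\n' "$(find "$d" -xdev 2>/dev/null | wc -l)" "$d"
done | sort -rn | head
```

Re-running the loop against the worst offender's subdirectories narrows the source down quickly.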