After some Google research I have figured out that inode usage probably relates to the number of directories/files.
- I am guessing there is a limit (which is what the inode usage % is measured against). What determines this limit?
- If inode usage hits 100%, what happens?
Some of the previous answers give you a general idea of what is going on, but let's find out how to do something about it. I know you're going to think this pedantic, but let's start with your filesystem type and discuss each as we've seen them under Linux. The fact that your question is tagged with Linux is important, because other *nix operating systems support lots of other filesystem types. I am not an expert on these individual filesystems, but I can quickly understand detailed articles well enough to hit the important parts.
ext2/ext3/ext4
Oliver Diedrich wrote an article entitled "Tuning the Linux file system" that makes a couple of very important points. First, these filesystems put the inodes into a reserved area of the disk that allows them to be cached and accessed quickly. The size of this area is fixed at the time the filesystem is created. That means that in order to increase it, you are going to have to either a) mirror the files onto another, properly sized filesystem and have a switch-over period, or b) back up the filesystem, recreate it correctly, and restore the files.
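Here is a minimal sketch of option b), with placeholder names throughout (/dev/sda1 as the data filesystem being rebuilt, /mnt/old and /mnt/backup as mount points, and /mnt/backup sitting on a spare disk big enough to hold everything); the mke2fs inode options themselves are covered further down:

mount /dev/sda1 /mnt/old
rsync -aHAX /mnt/old/ /mnt/backup/      # copy everything off, preserving hard links, ACLs and xattrs
umount /mnt/old
mke2fs -t ext4 -N 30000000 /dev/sda1    # recreate with more inodes (the -N value is just an example) -- this destroys the old contents
mount /dev/sda1 /mnt/old
rsync -aHAX /mnt/backup/ /mnt/old/      # restore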
To find out how you're doing on a particular filesystem, say /dev/sda1, use the
dumpe2fs -h /dev/sda1
command and look for the lines about inodes (Inode count and Free inodes). My filesystem has about 2.4% of my inodes in use on a 226 GB filesystem with 100 GB in use (Ubuntu root disk).

Others have said that when the inodes are gone, they're gone. And they aren't kidding. I found a tool called ext2resize that was eventually going to offer this feature, but the author seems to have gotten an interesting day job that takes too much of his time. See the FAQ. Don't hold your breath for ext2/3/4 to support this kind of growth, as most in the kernel and filesystem community believe that the really interesting work is in solid-state-disk filesystems as well as clustered filesystems. The ext2/3/4 stuff is old hat, even if ext4 is fairly new.
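For the usage percentage the question mentions, df -i is the quickest check; it reports inode totals and the IUse% column per mounted filesystem. For example (the numbers below are made up for illustration and will differ on your system):

df -i /
Filesystem       Inodes  IUsed    IFree IUse% Mounted on
/dev/sda1      14942208 612304 14329904    5% /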
Another interesting note is that you can try to optimize for directories containing lots of files. Given that you think you will run out of inodes, you probably have some really long directory listings.
tune2fs -O dir_index /dev/sda1
enables hashed b-tree indexing of directories instead of the standard linked-list layout. See the man page for tune2fs or Steve's post over at debian-administration.org.
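As a sanity check, you can confirm whether dir_index is already enabled, and after turning it on you can have e2fsck rebuild the existing directory indexes. A minimal sketch, reusing /dev/sda1 from above (e2fsck -D has to run on an unmounted filesystem):

tune2fs -l /dev/sda1 | grep 'Filesystem features'   # dir_index should appear in this list
umount /dev/sda1
e2fsck -fD /dev/sda1                                # -D re-optimizes (re-indexes) directories

If you want to remake the filesystem with more inodes, you'll have to use the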
mke2fs -N ___
option to mke2fs (see the man page, and use
dumpe2fs -h
for the default, computed number on an existing filesystem). Optionally, you can change the bytes-per-inode ratio when you make the filesystem with the
mke2fs -i ___
option to mke2fs. My Ubuntu 10.04 distribution ships with a default ratio of 16384 or, if I read this correctly, one 256-byte inode for every 16384 bytes of data, i.e. every four 4 KB data blocks. Filesystems for newsreaders get a default ratio of 4096, which is four times as many inodes.
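If you do go the rebuild route, a dry run lets you sanity-check the numbers before committing. A minimal sketch, assuming /dev/sdb1 is a spare, empty partition and the -i value is just an example:

mke2fs -n -t ext4 /dev/sdb1              # -n: dry run, prints the inode count it would create
mke2fs -t ext4 -i 8192 /dev/sdb1         # one inode per 8192 bytes, i.e. twice the default density
dumpe2fs -h /dev/sdb1 | grep -i inode    # confirm Inode count and Free inodes afterwards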
Other Filesystems

The last three years have seen a popularity explosion in filesystems. Many are available under Linux, although the popular ZFS is only available under Sun's Solaris and FreeBSD. Under Linux, you might consider exploring Btrfs, ReiserFS 3, XFS, or JFS. There are also commercial filesystems which take your availability and scalability to heart, but there are so many that you'll have to look here at Wikipedia.
Conclusions
Either size your inode table generously on day 1 to handle your file count, or move to a filesystem that can handle the load.
The file-system uses inodes to track file locations on disk. If you don't have any more inodes, you can't write more files to the file-system until more become available. It is best to plan what will live on the file-system before formatting it: with many file-systems you can choose an inode density appropriate to the size of the files that will live there (a larger bytes-per-inode ratio for large files, a smaller one for many small files) and thus maximize the number of files per unit of file-system space.
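On the ext family this planning is exposed as mke2fs "usage types" (defined in /etc/mke2fs.conf), which preset the bytes-per-inode ratio. A minimal sketch with /dev/sdc1 as a placeholder device; the exact ratios come from your distribution's mke2fs.conf:

mke2fs -t ext4 -T news /dev/sdc1         # many small files: roughly one inode per 4096 bytes
mke2fs -t ext4 -T largefile4 /dev/sdc1   # few huge files: roughly one inode per 4 MiB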
high inode usage typically indicates lots of small files (or perhaps filesystem corruption).
if you run out of inodes, you will not be able to create additional files, and so the disk will be 'full' regardless of its actual available capacity in MB/GB/TB (the sketch below demonstrates this)
sometimes the sysadmin may set a limit on the number of files (an inode quota), so you cannot keep a huge number of small files on that kind of filesystem :)
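To see the "full with free space" behaviour for yourself, you can starve a scratch filesystem of inodes on a loop device. A minimal sketch (run as root somewhere disposable; the image path, size and inode count are all arbitrary choices):

dd if=/dev/zero of=/tmp/tiny.img bs=1M count=64     # 64 MB scratch image
mke2fs -F -t ext4 -N 64 /tmp/tiny.img               # deliberately tiny inode count
mkdir -p /mnt/tiny && mount -o loop /tmp/tiny.img /mnt/tiny
cd /mnt/tiny
i=0; while touch "f$i" 2>/dev/null; do i=$((i+1)); done
touch one-more    # fails with "No space left on device" (ENOSPC)
df -h .           # still reports plenty of free blocks...
df -i .           # ...but IUse% is 100%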