This was discussed on the freebsd-questions list in November 2008. To quote from Erik Trulsson:
Each i-node on the disk contains a field telling how many hard-links point to that inode. This field is a (signed) 16-bit value, meaning the maximum number of hardlinks allowed is 32767. Each subdirectory created contains a hardlink ('..') to its parent, thus limiting the number of subdirectories in a single directory to less than 32767.
Note that this does not limit the number of files you can have in a single directory, since normal files do not contain hardlinks to the parent directory, but there are of course limits to the total number of files and directories you can have on a single filesystem, based on how many inodes were created when the filesystem was first created.
(Full message, start of thread)
These are theoretical limits; as the other answers point out, you will start to run into performance problems well before you hit any of them.
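If you want to see the link-count limit for yourself, a rough sketch along these lines will do (the /tmp/linktest path is only an illustration; use a scratch directory that actually lives on a UFS filesystem):

    mkdir /tmp/linktest && cd /tmp/linktest
    i=0
    # each new subdirectory adds a '..' hard link to the parent,
    # so mkdir fails once the parent's link count hits the maximum
    while mkdir "d$i" 2>/dev/null; do i=$((i+1)); done
    echo "created $i subdirectories"
    stat -f %l .    # link count of the parent directory

On UFS the loop should stop a couple of entries short of 32767, since an empty directory already starts with a link count of 2.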
I did some things with FreeBSD 6.x that involved large numbers of files (50,000+) and don't remember hitting any specific limitation. A quick check on a 7.2 system shows the limit to be well over 100,000. The process is still running as of now; when it fails I'll let you know what the hard limit is for 7.2, which is likely similar to 6.x.
That said, you see a huge performance hit above roughly 30,000 directory entries when creating new files or directories. At that point people start storing files as HashOfName/name instead of just name, so that the entries are spread over many smaller directories and lookups stay fast.
I would expect that number to vary based on the type of filesystem involved as well.
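As a rough illustration of the HashOfName/name layout (the md5 hash, the two-character prefix, and the file name are all just assumptions for the example; any stable hash and prefix length will do):

    # place each file in a subdirectory named after a prefix of its name's hash
    name="somefile.dat"
    prefix=$(printf %s "$name" | md5 | cut -c1-2)   # e.g. "3a"
    mkdir -p "$prefix"
    mv "$name" "$prefix/$name"

Lookups then only have to scan a much smaller prefix directory instead of one directory holding tens of thousands of entries.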
Your question is already answered, so just a little performance tip: if you have a lot of small files you should increase vfs.ufs.dirhash_maxmem; the default of 2 MB is too small for thousands of files.
I have a line like that in my /etc/sysctl.conf:
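For example (the 16 MB value below is only an illustration; pick whatever fits your RAM and workload):

    vfs.ufs.dirhash_maxmem=16777216

You can also apply it on a running system with sysctl vfs.ufs.dirhash_maxmem=16777216 rather than waiting for a reboot.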
You can read more about dirhash in the "UFS improvements" paper from BSDCon and on the wiki.
The total number of inodes is what limits the total number of files you can put in a directory. The inodes are created when you format the filesystem; you can get more of them by using a smaller block/fragment size or a lower bytes-per-inode density. See man newfs for details.
vfs.ufs.dirhash_maxmem sets the amount of memory used to hash the entries of large directories. It only affects performance, not any limit. If you have memory to spare, make it bigger; otherwise don't worry about it.
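To make the newfs point concrete, a minimal sketch (the device /dev/ada0p2, the mount point, and the 4096 bytes-per-inode value are only placeholders; newfs destroys whatever is on the partition):

    # create a UFS filesystem with one inode per 4 KB of data space,
    # i.e. more inodes than the default layout would give you
    newfs -i 4096 /dev/ada0p2
    # later, check how many inodes are used and how many are left
    df -i /mnt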