I know that the size of a directory itself is a different thing from the sizes of the files inside it.
I think of a directory as a list of the files and directories within it, so its size should be related to the number of entries it contains, with a minimum of 4096 bytes because of the block-size constraint.
But on my system I have many well-populated directories that are 4096 bytes, and some considerably less populated directories that are around 10 megabytes. Can you please explain why this happens?
The size is determined by the filesystem used and by the maximum number of entries the directory has held at any point in time. Once the space initially reserved for the directory is exhausted, more space is allocated for it. However, when the number of entries goes down, the allocated space isn't automatically freed. So if, at one point, a directory held 10 million entries, it would remain the same size even if 9,999,999 of them were deleted. This leads to interesting situations like the one described in this Unix & Linux post.
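A minimal shell sketch of the effect (the directory name `testdir` and the file count are just illustrative; the exact sizes you'll see depend on the filesystem and on the lengths of the file names):

```
$ mkdir testdir
$ touch testdir/file{1..10000}
$ stat -c %s testdir    # first stat: on ext4 this grows well past 4096
$ rm testdir/file*
$ stat -c %s testdir    # second stat: on ext4 the grown size remains
```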
This doesn't necessarily hold for all filesystems. The `ext{2,3,4}` filesystems are affected; NTFS is too, to a lesser extent. `tmpfs` and `btrfs` aren't. ZFS is rather conservative: I got an output of 2 for the second `stat` command.
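On the ext family, the over-allocated space can be reclaimed manually: running `e2fsck` with `-D` on an unmounted filesystem optimizes and compacts its directories. A rough sketch, assuming the filesystem lives on a hypothetical device `/dev/sdb1` mounted at `/mnt/data`:

```
# umount /mnt/data          # the filesystem must not be mounted
# e2fsck -fD /dev/sdb1      # -f forces a check, -D optimizes/compacts directories
# mount /dev/sdb1 /mnt/data
```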