I could not find more information on this, and reading the code makes me think I'm running into a compatibility issue.
Here's the story: This is in a virtualized multi-tenant environment. We've been trying to find a way to nicely downsize virtual disks if the user doesn't need much space any more. Growing them is easy, but shrinking is really hard.
Today we had the idea of simply overprovisioning the disks: leave the virtual disk as big as it was, but enforce a quota on top of it. That, together with trimming, would cause actual usage on the backend SAN to drop and thus effectively reflect the customer's usage.
XFS does have a feature to implement per-directory quotas (which we'll try to enable for / to get what we want). ext4 only has quotas on a per-user or per-group basis, so that won't work.
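For reference, this is roughly what the XFS side would look like as a project ("per-directory") quota; a minimal sketch, with /dev/vdb1, /srv/tenant, the project name and the 20g limit all being placeholders for our setup. (For / itself the prjquota option would have to be present at the initial mount, e.g. via rootflags, since it can't be added by a plain remount.)

```
# Mount with project quota accounting enabled
mount -o prjquota /dev/vdb1 /srv/tenant

# Define project 1 covering the whole mount point
echo "1:/srv/tenant" >> /etc/projects
echo "tenant1:1"     >> /etc/projid

# Apply the project id to the tree and set a hard block limit
xfs_quota -x -c "project -s tenant1" /srv/tenant
xfs_quota -x -c "limit -p bhard=20g tenant1" /srv/tenant

# Check usage against the limit
xfs_quota -x -c "report -p" /srv/tenant
```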
Enter: reserved blocks. Those are intended to keep some space free that only root (or rather a specified user/group) can use when a machine hits a "disk full" scenario.
I toyed around with it, but I could never trigger it: even with the reserved block count set to 50% of the disk (through tune2fs), I was able, as a regular user, to consume much more (and even fill the disk completely).
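For context, this is roughly what I tried (a sketch from memory; /dev/vdb1, /mnt/test and testuser are placeholders):

```
# Reserve 50% of the blocks for root
tune2fs -m 50 /dev/vdb1
mount /dev/vdb1 /mnt/test
chmod 1777 /mnt/test

# As an unprivileged user, try to write past the 50% mark --
# I expected ENOSPC here, but the writes kept succeeding
sudo -u testuser dd if=/dev/zero of=/mnt/test/filler bs=1M count=100000
df -h /mnt/test
```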
What I found surprising is that the data reported by tune2fs -l showed half of the disk as still unused. The internet says that this number is not reliable while the filesystem is mounted, so, well. (Interestingly, this number didn't change even after a clean reboot.)
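To illustrate, the numbers I'm comparing come from something like this (again, /dev/vdb1 and /mnt/test are placeholders); as far as I understand, tune2fs -l reads the on-disk superblock, which can lag behind what statfs()/df report for a mounted filesystem:

```
# Superblock view (may be stale while the filesystem is mounted)
tune2fs -l /dev/vdb1 | grep -E 'Block count|Reserved block count|Free blocks'

# Live view via statfs(), as root and as a regular user
df -B 4K /mnt/test
sudo -u testuser df -B 4K /mnt/test
```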
I started reading up on kernel options and the actual ext4 code running in our environment, and I stumbled over a code path which suggests that the delayed block allocator might not trigger the checks for reserved blocks.
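One test I still plan to run to narrow this down: repeat the experiment with delayed allocation disabled via the documented nodelalloc ext4 mount option (paths and user are placeholders as above):

```
# Remount without the delayed block allocator and try to overshoot again
umount /mnt/test
mount -o nodelalloc /dev/vdb1 /mnt/test
sudo -u testuser dd if=/dev/zero of=/mnt/test/filler2 bs=1M count=100000
df -h /mnt/test
```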
As it's hard to find anything reliable on this, here are my more specific questions:
- Is the feature intended to still work at all?
- Would it make a good replacement for a disk-wide quota?
- Am I running into a compatibility issue where reserved blocks may (without this being documented anywhere) not work together with other features, like the delayed block allocator?