The Solaris ZFS Best Practices Guide recommends keeping ZFS pool utilization below 80% for best performance:
- Keep pool space under 80% utilization to maintain pool performance. Currently, pool performance can degrade when a pool is very full and file systems are updated frequently, such as on a busy mail server. Full pools might cause a performance penalty, but no other issues. If the primary workload is immutable files (write once, never remove), then you can keep a pool in the 95-96% utilization range. Keep in mind that even with mostly static content in the 95-96% range, write, read, and resilvering performance might suffer.
A common way to implement this seems to be to create a file system or volume that stores no data but carries a reservation of about 20% of the pool's capacity.
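For illustration, a minimal sketch of that approach, assuming a hypothetical pool named `tank` of roughly 10 TB (the pool name, size, and 2 TB figure are all assumptions, not from the guide):

```
# Create an empty, unmounted dataset whose refreservation holds back
# ~20% of the pool; other datasets can no longer consume that space.
zfs create -o refreservation=2T -o mountpoint=none tank/reserved

# Verify the reservation is being accounted against the pool.
zfs get refreservation tank/reserved
zpool list tank
```

Reclaiming the headroom later is just a matter of shrinking or destroying `tank/reserved`.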
I can absolutely see, given ZFS's copy-on-write behavior, how this would help with rotational storage: rotational storage tends to be heavily IOPS-constrained, so giving the file system room to make large contiguous allocations makes a lot of sense (even if they wouldn't always be used as such).
However, I'm not sure the 80% target makes as much sense for solid-state storage, which, besides being a good bit more expensive per gigabyte, has nowhere near the IOPS constraints of rotational storage.
Should SSD-backed ZFS pools be restricted to less than about 80% capacity utilization for performance reasons just like HDD-backed pools, or can SSD-backed pools be allowed to fill up more without significant adverse impact on I/O performance?
I'd say yes.
My rule is to stay under 87% on SSD-only pools when using drives that haven't been heavily over-provisioned.
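As a sketch of one way to hold that line, assuming a hypothetical 1 TB pool named `ssdpool`, a quota on the pool's root dataset caps everything beneath it:

```
# Check current utilization (CAP is the percentage of pool space used).
zpool list -o name,size,alloc,cap ssdpool

# Hard ceiling of ~87% of usable capacity, applied to the pool's
# root dataset and all of its descendants.
zfs set quota=870G ssdpool
```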
The SSD use case introduces drive endurance as a factor (a nearly full SSD has less spare area to absorb writes, which drives up write amplification), while random write latency is less of an issue than with spinning disks.
Either way, regardless of disk choice, why would you intentionally plan to run your workloads at a high capacity level? Every copy-on-write file system advises against it, so I'd still avoid going that high if it can be avoided.
The issue with any system that gets too full is finding the next free space to write into. Any copy-on-write system, ZFS included, is especially susceptible, as is any log-based system where a background process reclaims now-unused data after an overwrite that, in principle, replaced the old data but actually wrote the new data somewhere else. There is also the issue of fragmentation, which hurts performance as it gets harder to find large contiguous regions and data must be written in fragments scattered across different places.
None of this depends on the media, be it HDD or SSD.
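You can watch this happening: OpenZFS reports a pool-level fragmentation metric that describes the free space (not the files), so a pool that is both nearly full and highly fragmented is exactly the situation described above. The pool name here is hypothetical:

```
# FRAG is the fragmentation of the remaining free space; a high CAP
# combined with a high FRAG means new writes get scattered into small gaps.
zpool list -o name,size,alloc,free,cap,frag tank
```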