I have a pool of 28 2 TB disks (56 TB total) arranged as 4 raidz1 arrays of 7 disks each. Since raidz1 is roughly RAID5, I'd expect 1 disk per array to be used for parity, so the resulting volume should be 2TB * 4 * (7-1) = 48TB, right?
Now, what I see on my system:
$ zpool list volume
NAME     SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
volume   50.5T  308K   50.5T  0%   1.00x  ONLINE  -
$ zfs list volume
NAME     USED   AVAIL  REFER  MOUNTPOINT
volume   2.00T  40.3T  75.8K  /volume
$ df -h /volume
Filesystem  Size  Used  Available  Capacity  Mounted on
volume      42T   75K   40T        1%        /volume
So, there are only 42T instead of 48T. Where are the missing 6TB? And where does the number 50.5T come from?
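Just to spell out where my 48TB figure comes from, here is the arithmetic as a small Python sketch (using the manufacturer's base-10 terabytes):

disks_per_vdev = 7
vdevs = 4
disk_tb = 2                                  # "2 TB" per disk, as labelled
data_disks = vdevs * (disks_per_vdev - 1)    # raidz1: 1 parity disk per vdev -> 24 data disks
print(data_disks * disk_tb)                  # 48 (TB, base 10)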
A 2 TB disk is not 2 TiB in size - it's only 2*10^12 / 2^30 ≈ 1862.6 GiB.
4 arrays of 6 effective disks each would be 24 * 1862.6 ≈ 44703 GiB, or about 43.7 TiB, of real, usable storage.
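A rough Python back-of-the-envelope of that conversion (assuming zfs/df report binary GiB/TiB, which they do):

disk_bytes = 2 * 10**12                                # one "2 TB" disk, as sold
gib_per_disk = disk_bytes / 2**30                      # ~1862.6 GiB per disk
data_disks = 4 * (7 - 1)                               # 24 effective data disks
usable_tib = data_disks * disk_bytes / 2**40           # capacity before any ZFS overhead
print(round(gib_per_disk, 1), round(usable_tib, 1))    # 1862.6 43.7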
I reckon there's some additional overhead you're not taking into account - IIRC ZFS also reserves space for metadata and its own bookkeeping (and needs free space for things like snapshots and scrubbing to work smoothly), which eats into the usable figure.
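For what it's worth, the biggest chunk of that overhead I know of is ZFS's "slop space" reservation - by default it holds back 1/32 of the pool so it can never be filled completely. The exact fraction is tunable and has varied between versions, so treat this as a rough sketch rather than an exact accounting:

usable_tib = 24 * 2 * 10**12 / 2**40     # ~43.7 TiB of data capacity (see above)
slop_tib = usable_tib / 32               # assumed default 1/32 slop reservation (spa_slop_shift=5)
print(round(usable_tib - slop_tib, 1))   # ~42.3 TiB - close to the 42T that df shows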
To clarify the discrepancy in output between the commands: the zpool command counts the disks being used for redundancy as space, while the zfs command does not. Thus, the 50.5T number is your raw disk size, while the 42T is what remains after taking out the 4 disks' worth of parity. Hard drive manufacturers measure disk size in base 10, while computers measure bytes in base 2.
It should really say 42 TiB, to make clear that it's the binary prefix rather than the SI use of the tera- prefix.
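To tie the two numbers together, one last bit of rough arithmetic (approximate, since zpool rounds and keeps a little space aside for its own labels):

disk_bytes = 2 * 10**12                       # a "2 TB" disk as sold
raw_tib = 28 * disk_bytes / 2**40             # all 28 disks: ~50.9 TiB raw, what zpool list counts (shown as 50.5T)
data_tib = 24 * disk_bytes / 2**40            # minus one disk of parity per vdev: ~43.7 TiB, the basis for the 42T from zfs/df
print(round(raw_tib, 1), round(data_tib, 1))  # 50.9 43.7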