I'd like to know the exact math for calculating usable capacity on a NetApp filer's aggregates. From experience I've been using a magic factor of 0.65-0.7 times the net RG capacity of all the aggregate's RAID groups (RGs).
Just as a simple example: 3 shelves, each with 24 x 1 TB spindles, plus 3 spares. The aggregate is built with an RG size of 15, i.e. 3 RGs in one plex, for a total of 45 TB raw capacity. The usable capacity from the RGs is (15 - 2) * 3 = 39 TB. There are no volumes on this aggregate and the aggregate snap reserve is 5%.
The system reports a usable capacity of 27 TB on that aggregate, which is pretty much 39 TB times the magic factor. Can anyone provide some insight?
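For reference, this is the naive calculation I'm doing (a rough sketch in Python: marketing TB per spindle, no right-sizing, no overheads, and the 0.7 factor is purely empirical):

    RG_COUNT = 3          # RAID groups in the aggregate
    RG_SIZE = 15          # disks per RAID group
    PARITY_PER_RG = 2     # RAID-DP: one parity and one dparity disk per RG
    DISK_TB = 1.0         # marketing capacity per spindle

    net_rg_tb = RG_COUNT * (RG_SIZE - PARITY_PER_RG) * DISK_TB
    print(net_rg_tb)          # 39.0 TB "net RG capacity"
    print(net_rg_tb * 0.7)    # ~27.3 TB, roughly what the filer reports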
If you have a NOW account, this article should help answer your question. In general, you start with the physical capacity of the disks, subtract the space lost to parity (which depends on whether you are using single or dual parity and on the size of your RAID groups), then subtract 10% for system overhead and 20% for the default snapshot reservation. You can adjust the snapshot reservation, but the remaining overhead is fixed.
Regarding parity: with single parity you lose one disk's worth of space per RAID group, and with dual parity (RAID-DP) two disks' worth.
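A minimal sketch of that arithmetic in Python, assuming RAID-DP for dual parity and treating the snapshot reservation as an adjustable parameter; note that disk_tb here is still the raw marketing size, not the right-sized value:

    def usable_tb(disks_per_rg, rg_count, disk_tb,
                  dual_parity=True, system_overhead=0.10, snap_reserve=0.20):
        # Data capacity after removing parity disks from each RAID group ...
        parity = 2 if dual_parity else 1
        data_tb = rg_count * (disks_per_rg - parity) * disk_tb
        # ... then remove system overhead and the snapshot reservation.
        return data_tb * (1 - system_overhead) * (1 - snap_reserve)

    print(usable_tb(15, 3, 1.0))                     # ~28.1 TB with the 20% default reserve
    print(usable_tb(15, 3, 1.0, snap_reserve=0.05))  # ~33.3 TB with the question's 5% reserve

Plugging in the question's numbers still overshoots the reported 27 TB because a "1 TB" spindle does not deliver a full terabyte once right-sized, which the point about "sysconfig -r" below addresses.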
A few things are missing that account for the difference: disk "right-sizing" (a marketing "1 TB" spindle yields noticeably less usable space), system overhead, the aggregate snap reserve, and binary vs. decimal units.
The whole lot is explained pretty thoroughly in this blog post.
You can get the physical size vs. the "right-size" of the disks by running "sysconfig -r" on the filer. Put the right-sized figures into your formula, take binary units into account, and it will work out. You can also find the right-sized capacities of the various disk types in the Storage Management Guide for your ONTAP release.
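As a rough illustration of how right-sizing and binary units close the gap (the ~828 GiB right-sized figure for a "1 TB" spindle is an assumption for the sake of the example; read the real number from "sysconfig -r" on your system):

    RIGHT_SIZED_GIB = 828        # assumed right-size of a "1 TB" spindle; check sysconfig -r
    DATA_DISKS = (15 - 2) * 3    # 13 data disks per RG times 3 RGs = 39
    WAFL_OVERHEAD = 0.10         # system overhead
    AGGR_SNAP_RESERVE = 0.05     # aggregate snap reserve from the question

    usable_gib = DATA_DISKS * RIGHT_SIZED_GIB * (1 - WAFL_OVERHEAD) * (1 - AGGR_SNAP_RESERVE)
    print(usable_gib / 1024)     # ~26.96 TiB, in line with the reported 27 TB

So the "magic factor" is really just right-sizing, overhead and snap reserve multiplied together, not a fixed constant.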