I'm using an HP Smart Array P400 and seeing a rather large amount of capacity overhead from the controller that I would not expect, and I'm wondering where the space is going.
I have six SAS drives hooked up. They're all marked 146GB except one, which happens to be 300GB (that won't matter for RAID, since the array treats every member as the size of the smallest drive). I'm not sure if 146GB here means 146,000,000,000 bytes or 156,766,306,304 bytes (i.e. 146GiB) or what.
In ACU, under Physical Drives it shows them as 146GB. When I create an array from them, unused space (before creating a logical disk or setting the redundancy level) shows as 820.2GB.
Since 146*6=876GB and not 820.2GB, at first I would have thought that the disk sizes were being quoted in decimal gigs (GB = 10^9) and the array size in binary gigs (GiB = 2^30).
However, if I assume this, the numbers still don't work out. 146GB in binary would be 135.973GiB, and six of them would be 815.839GiB.
815.8GiB is smaller than the 820.2GB that ACU quotes as the array size, which logically means it must be quoting both the drive sizes and the array size in the same units, be they binary or decimal gigs.
But if that's the case, then 55.8GB, a whopping 6.4% of the array, has mysteriously vanished.
Now, I know the RAID controller probably places some metadata on the drives so I can't expect 100% of the space to be available. But I would expect this metadata should only be on the order of a few megabytes at most. What accounts for a loss of 55.8GB over six drives?
To clarify, we are not talking about losses due to redundancy. For instance, RAID 1+0 makes 50% of the space available, RAID5 across six drives makes 83.3% available, and so on. What I'm talking about here is space that is lost before a redundancy level is even chosen. This space would be lost even with RAID0, which should expose nearly 100% of the space.
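To put the arithmetic in one place, here's a quick sanity check (a minimal Python sketch; the figures are just the ones quoted above):

    GB  = 10**9   # decimal gigabyte (what the drive label uses)
    GiB = 2**30   # binary gigabyte / gibibyte

    drives   = 6
    labelled = 146 * GB                      # bytes, if the label is decimal

    per_drive_gib = labelled / GiB           # ~135.97
    array_gib     = drives * per_drive_gib   # ~815.84 -- still short of the 820.2 ACU shows

    print(f"one 146GB drive: {per_drive_gib:.2f} GiB")
    print(f"six drives:      {array_gib:.2f} GiB (ACU reports 820.2)")

    # Redundancy is a separate matter: RAID 1+0 exposes 1/2 of the raw space,
    # RAID5 on n drives exposes (n-1)/n, but that is applied after the 820.2GB
    # figure, so it can't be where this space went.
    print(f"RAID5 usable fraction on {drives} drives: {(drives - 1) / drives:.3f}")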
I wouldn't really worry about it. It is what it is. You can't change it. Plan accordingly.
You've encountered the difference between a gigabyte (GB, 10^9 bytes) and a gibibyte (GiB, 2^30 bytes). That accounts for the drive size difference.
Here's an example: a 6-disk RAID 1+0 array made up of 300GB SAS drives on a Smart Array P410 controller. Instead of 900GB of usable space, it shows 838GB.
However, when the same disks are run in a Nexenta/ZFS setup with LSI SAS controllers, the format command shows that I'm really working with 279.4GB disks.
Since RAID 1+0 across six disks exposes the capacity of three of them, 3 x 279.4GB = 838.2GB, which is close to the 838.1GB of available space reported for the Smart Array-based logical drive.

Running the same check for a 146GB drive on one of my ZFS systems shows the disks registering as 136.73GB, so 6 x 136.73GB = 820.38GB, versus the 820.2GB you see on your system. This means that your usable space is simply a function of the drives' reported size, and definitely not an issue with HP Smart Array RAID controller overhead.
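To make that arithmetic concrete, here's a small Python sketch of the same calculation. It assumes the "146GB" drives actually hold about 146.8GB in decimal terms (which is what the 136.73 figure from format implies) and that the controller counts in binary gigabytes while labelling them "GB":

    GB  = 10**9
    GiB = 2**30

    # Decimal capacities of the physical drives; 146.8GB is an inferred value.
    drive_300 = 300 * GB
    drive_146 = 146.8 * GB

    # RAID 1+0 across six disks exposes the capacity of three of them:
    usable_raid10 = 3 * drive_300 / GiB   # ~838.2, vs the 838.1GB the P410 shows
    # The pre-redundancy pool across six "146GB" disks:
    raw_pool = 6 * drive_146 / GiB        # ~820.3, vs the 820.2GB ACU shows

    print(f"6 x 300GB in RAID 1+0: {usable_raid10:.1f}")
    print(f"6 x 146GB raw pool:    {raw_pool:.1f}")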