We are using VMware vSphere and NetApp and are trying to troubleshoot some mind-melting space problems. Part of this problem is two colleagues disagreeing on how thick provisioned disks relate to used space on the storage.
These are thin-provisioned iSCSI SAN volumes on the NetApp, and Space Reservation shows as 'Disabled' when I run the lun show -v <path> command.
Person 1 is saying that a thick provisioned vmdk of 100 GB will show as 100 GB of used space on the storage regardless of how much data is on that disk.
Person 2 is saying that a thick provisioned vmdk of 100 GB with, let's say, 10 GB of data on the disk will only show as 10 GB of used space on the storage.
Who is correct?
We're trying to figure out why the amount of free space shown on the datastore in vCenter is less than the free space showing on the NetApp LUN. We have enabled space_alloc on the LUN, and running esxcli storage core device vaai status get -d <naa.id> subsequently shows the Delete Status as Supported.
We also have NetApp's Storage Efficiency configured to run on the volumes.
If there were less free space on the LUN, that would suggest a problem with the automatic space reclamation, but the problem is the other way around.
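In case it helps, this is roughly how we are comparing the two sides (the datastore name and LUN path below are placeholders):

    # On the ESXi host: free space as the VMFS datastore sees it
    df -h /vmfs/volumes/<datastore>

    # On the NetApp (clustered ONTAP syntax): provisioned size vs. space actually consumed by the LUN
    lun show -path <path> -fields size,size-used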
It depends.
From the point of view of VMware, a thick-provisioned disk's size is fully reserved, so its whole size is marked as used and deducted from the free space on the datastore, regardless of how much data is actually in the virtual disk.
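A quick way to see this (just a sketch; the datastore path is a placeholder): create a thick disk and watch the datastore's free space drop by the full provisioned size, even though nothing has been written inside it.

    # Note the datastore free space, create a 100 GB lazy-zeroed thick disk, then check again
    df -h /vmfs/volumes/<datastore>
    vmkfstools -c 100G -d zeroedthick /vmfs/volumes/<datastore>/test.vmdk
    df -h /vmfs/volumes/<datastore>
    # Free space on the datastore drops by ~100 GB immediately, regardless of guest data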
But from the point of view of the storage system, it may use its own thin provisioning and count as used only the space where data has actually been written; it may even be running all sorts of deduplication or other efficiency features to make better use of the available space, and the OS (ESXi in this case) will be blissfully unaware of it unless it has storage-specific drivers and management tools.
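For example, on a NetApp you can see efficiency savings and actual consumption that the host never sees (clustered ONTAP syntax; the vserver and volume names are placeholders):

    # Deduplication/compression savings on the volume backing the datastore
    volume efficiency show -vserver <svm> -volume <vol>
    # Used vs. available space as the array counts it, after efficiency
    volume show -vserver <svm> -volume <vol> -fields used,available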
I am assuming, based on the wording of your question, that you are talking about NFS datastores; please correct me if not and I’ll update the answer.
Assuming modern versions of VMware and a VAAI configuration, I believe ESXi will translate the creation of a thick VMDK into an instruction to the storage to use thick provisioning. On NetApp, that works a little differently: they refer to it as space reservation, but in the background it is always thin. The only difference is whether the space you have reserved for “thick” volumes is guaranteed to be available when you consume it. When you look at the used capacity of an aggregate, it will indeed show the entire volume’s capacity. That said, it has not actually frozen any free space or anything like that, and will continue to use the pool of free space for background tasks as needed.
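If you want to see how it is actually configured, you can check the reservation on the LUN, the guarantee on the volume, and what the aggregate reports as used (clustered ONTAP syntax; names are placeholders):

    # Is space reservation enabled on the LUN, and is space_alloc (UNMAP) on?
    lun show -path <path> -fields space-reserve,space-allocation
    # Is the volume guaranteed ("thick") or thin?
    volume show -vserver <svm> -volume <vol> -fields space-guarantee
    # Aggregate view: a volume-guaranteed volume shows its full size as used here
    storage aggregate show -aggregate <aggr> -fields usedsize,availsize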
If you end up zeroing the VMDK, it will of course consume its entire size on the storage; however, any efficiency technology enabled on the NetApp will subsequently reduce it.
You can easily test this by creating an empty volume on the NetApp, then changing its property from thick to thin and watching the behaviour of the aggregate's free space.
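Something like this, assuming clustered ONTAP and placeholder names; the aggregate's available space should drop by the full 100 GB while the guarantee is 'volume' and come back once it is set to 'none':

    # Create a 100 GB volume with a full space guarantee ("thick")
    volume create -vserver <svm> -volume testvol -aggregate <aggr> -size 100g -space-guarantee volume
    storage aggregate show -aggregate <aggr> -fields availsize

    # Switch it to thin and watch the aggregate free space come back
    volume modify -vserver <svm> -volume testvol -space-guarantee none
    storage aggregate show -aggregate <aggr> -fields availsize

    # Clean up the test volume afterwards
    volume offline -vserver <svm> -volume testvol
    volume delete -vserver <svm> -volume testvol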