My Azure service depends on a huge filetree, and I am trying to handle that filetree using VHDs. What I currently plan to do is create a VHD file, mount it, format it as NTFS with transparent compression enabled, then copy the filetree there. Then I'll upload the VHD file to Windows Azure Blob Storage. When an Azure instance starts, it will download the VHD file, mount it and use it transparently as if it were a local folder.
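In PowerShell terms, the plan looks roughly like this (just a sketch; the path, size and drive label are placeholders I made up, and it assumes the Hyper-V and Storage cmdlets are available on the build machine):

    # Create a dynamically expanding VHD, mount it and get the new disk object.
    New-VHD -Path C:\temp\filetree.vhd -SizeBytes 2GB -Dynamic
    $disk = Mount-VHD -Path C:\temp\filetree.vhd -Passthru | Get-Disk

    # Partition and format it as NTFS.
    Initialize-Disk -Number $disk.Number -PartitionStyle MBR
    $vol = New-Partition -DiskNumber $disk.Number -UseMaximumSize -AssignDriveLetter |
           Format-Volume -FileSystem NTFS -NewFileSystemLabel "filetree"

    # Mark the volume root as compressed, copy the tree in, then compress what was copied.
    $root = "$($vol.DriveLetter):\"
    compact /c $root | Out-Null
    Copy-Item -Path C:\source\filetree\* -Destination $root -Recurse
    compact /c /s:$root | Out-Null

    Dismount-VHD -Path C:\temp\filetree.vhd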
Everything sounds great, but the filetree is something like 800 megabytes. With NTFS compression enabled it fits into a 600-megabyte VHD file. Yet if I ZIP the filetree it occupies only around 400 megabytes.
I want a file that is as small as possible and that does not need to be unpacked, so that it can be "mounted" and used transparently - essentially a VHD, but with better compression.
Can I have a VHD with better compression?
VHDs are already stored fairly efficiently. However, just like any normal hard drive they can fragment, and depending on the type of VHD they may need to be compacted periodically to keep their size down. You could try shrinking/compacting the VHD before uploading it to see whether that improves things at all.
http://blogs.technet.com/b/tonyso/archive/2008/10/09/hyper-v-how-to-shrink-a-vhd-file.aspx
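With the Hyper-V PowerShell module that boils down to something like this (a sketch; the file name and drive letter are placeholders, and Optimize-VHD's Full mode wants the disk mounted read-only):

    # Defragment the volume while it is mounted normally, so free space is contiguous.
    Mount-VHD -Path C:\temp\myimage.vhd
    Optimize-Volume -DriveLetter E -Defrag
    Dismount-VHD -Path C:\temp\myimage.vhd

    # Re-mount read-only and compact, which releases unused blocks from the file.
    Mount-VHD -Path C:\temp\myimage.vhd -ReadOnly
    Optimize-VHD -Path C:\temp\myimage.vhd -Mode Full
    Dismount-VHD -Path C:\temp\myimage.vhd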
It seems the author of the article linked below managed to compress a set of large VMs, over 180 GB in total, into a VHDX file of only about 25 GB!
The main technique, as far as I can follow, is enabling so-called data deduplication on the volume inside the VHDX (a Windows Server feature that can also be made to work on Windows 8.1 and above), and then defragmenting and compacting the virtual disk in the usual way; a rough sketch of those steps follows below the link.
I would be quite surprised if defragmenting really had an effect on the size here, but that is what he did.
Further packing with 7-Zip compressed it down to about 16 GB, but at the cost of losing the flexibility of a mountable virtual disk.
https://deploymentresearch.com/beyond-zip-how-to-store-183-gb-of-vms-in-a-19-gb-file-using-powershell/
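For anyone who wants to try it, the steps roughly translate to this (a sketch only, assuming Windows Server with the Hyper-V module, a VHDX at C:\vhds\vms.vhdx, and that its volume mounts as D: - all of those are placeholders):

    # Install data deduplication and enable it on the volume inside the mounted VHDX.
    Install-WindowsFeature -Name FS-Data-Deduplication
    Mount-VHD -Path C:\vhds\vms.vhdx
    Enable-DedupVolume -Volume "D:" -UsageType Default
    # By default dedup skips very recent files; allow it to process everything now.
    Set-DedupVolume -Volume "D:" -MinimumFileAgeDays 0

    # Run an optimization job so duplicate chunks actually get folded together.
    Start-DedupJob -Volume "D:" -Type Optimization -Wait

    # Dismount, re-mount read-only and compact the VHDX so the freed space
    # is released from the file itself.
    Dismount-VHD -Path C:\vhds\vms.vhdx
    Mount-VHD -Path C:\vhds\vms.vhdx -ReadOnly
    Optimize-VHD -Path C:\vhds\vms.vhdx -Mode Full
    Dismount-VHD -Path C:\vhds\vms.vhdx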