I have a Hyper-V host whose performance is bound by the storage subsystem, which is a RAID 10 array.
I'd like to add two PCIe SSD cards and use them to create a mirrored fast tier using Windows Server 2012 R2 tiered storage.
The question is: how should I determine how large my fast tier should be? I can run the Storage Tier Optimization Report after I have bought and installed the SSDs to find out whether they are correctly sized, but how do I get that information before I install the SSDs?
What you want is to trace your system activity to see how many I/O requests could be satisfied by the fast SSD cache. To obtain meaningful values, you should trace your system for a full work day, multiple times.
To do that, you can use both Windows Performance Monitor (disk counters) and the more in-depth Xperf tool. While Windows Performance Monitor is quite easy to use, Xperf is significantly more challenging; you can read more about it here. With the total I/O bytes read from and written to the storage subsystem, you can start to reason about the size of your fast SSD tier.
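As a starting point, here is a minimal sketch of an all-day capture of the built-in PhysicalDisk counters via PowerShell's Get-Counter; the sample interval, sample count, and output path are assumptions you should adjust to your environment:

```powershell
# Capture aggregate disk throughput and IOPS every 15 seconds
# for 12 hours (2880 samples), saving to a .blg log that
# Performance Monitor can open later.
$counters = @(
    '\PhysicalDisk(_Total)\Disk Read Bytes/sec',
    '\PhysicalDisk(_Total)\Disk Write Bytes/sec',
    '\PhysicalDisk(_Total)\Disk Transfers/sec'
)
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 2880 |
    Export-Counter -Path 'C:\PerfLogs\disk-workday.blg' -FileFormat BLG
```

Summing the read/write bytes over the day gives you the totals mentioned above, and the Transfers/sec counter shows how spiky the load is.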
Anyway, for what it's worth, as a baseline I would use an SSD tier of about 1/8 the size of the main storage array. At the same time, I would look for significantly cheaper (but still very fast) SATA/SAS SSDs rather than fast, but overpriced, PCIe storage.
This cannot seriously be answered, because a lot depends on the usage patterns and the required performance. You will hit bad times: patch day and the malware removal tool runs come to mind, and they will overwhelm whatever you throw at them.
But even apart from that, the question really cannot be answered without more details. VMs can vary very widely in their usage patterns (DNS/AD vs. a heavily used build server, for example). Given the 8 TB raw you have now, I would likely try to go with a 1 TB tier (2 x 1 TB, obviously, as you want them mirrored) and see where it goes from there.
This really depends on the amount of hot data you have - and you can only determine this when you've deployed the solution.
In my opinion, a much more important factor is not just the amount of SSD storage you have but also the number of disks. Storage Spaces performance is heavily dependent on the NumberOfColumns value you use. As this value cannot be changed on existing virtual disks, you probably want to get it right when creating the disk.
The recommendation here is to use a NumberOfColumns of 3 or 4. Having more columns means more throughput but slightly higher latency.
Another important factor is the interleave size. There are recommendations (which we also follow) to set a reduced interleave of 64 KB for Hyper-V workloads; the default is 256 KB (at least up to Server 2012 R2).
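For illustration, a minimal sketch of creating such a virtual disk in PowerShell; the pool name, disk name, and size are placeholders:

```powershell
# Create a mirrored virtual disk with 4 columns and a 64 KB
# interleave, per the Hyper-V recommendation above.
# Note: a two-copy mirror with NumberOfColumns 4 requires
# at least 8 physical disks in the pool.
New-VirtualDisk -StoragePoolFriendlyName 'Pool1' `
                -FriendlyName 'VMStore' `
                -ResiliencySettingName Mirror `
                -NumberOfColumns 4 `
                -Interleave 64KB `
                -Size 2TB
```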
To answer your actual question:
It may be better to get at least 4 or 6 SSDs and put them in a pool with at least 4 or 6 HDDs. When planning for scale-out, use 6 SSD + 6 HDD and add more HDDs if you need more storage, as long as the optimization report doesn't show that you're at the limits of your SSDs.
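A rough sketch of what that tiered pool could look like in PowerShell; all names and tier sizes are placeholders, and it assumes a single storage subsystem on the host:

```powershell
# Pool all available SSDs and HDDs, define one tier per media
# type, then carve a mirrored, tiered virtual disk from both.
$disks = Get-PhysicalDisk -CanPool $true
New-StoragePool -FriendlyName 'TieredPool' `
                -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName `
                -PhysicalDisks $disks

$ssdTier = New-StorageTier -StoragePoolFriendlyName 'TieredPool' `
                           -FriendlyName 'SSDTier' -MediaType SSD
$hddTier = New-StorageTier -StoragePoolFriendlyName 'TieredPool' `
                           -FriendlyName 'HDDTier' -MediaType HDD

New-VirtualDisk -StoragePoolFriendlyName 'TieredPool' `
                -FriendlyName 'VMStore' `
                -ResiliencySettingName Mirror `
                -StorageTiers $ssdTier, $hddTier `
                -StorageTierSizes 400GB, 3TB `
                -WriteCacheSize 1GB
```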
Regarding pricing, this solution may be in a similar price range to many PCIe cards while being more scalable and even faster.
A nice side effect is that you'll probably have more space available in the SSD tier, since several SSD drives will likely provide more storage than the PCIe cards.
Unfortunately, there is no tool that can predict this, so you will need to install the SSDs in order to run the report that tells you whether you provided enough fast storage to get the I/O that you require.