I have just acquired a server with:
2x quadcore Xeons
48G ECC RAM
5x 160GB SSDs on an LSI 9260-8i
Before deploying the target platform, I'd like to collect as much benchmark data as possible: testing I/O with hardware RAID in various configurations and with ZFS RAID-Z, as well as I/O performance under vSphere and with KVM virtualization. In order to see real disk I/O performance without cache effects, I tried running Iozone with a maximum file size of more than twice the physical RAM, as recommended in the documentation, so:
iozone -a -g100G
However, as one might expect, this takes far too long to be practicable. (I stopped the run after seven hours...)
I'd like to reduce the range of record and file sizes to values that might reflect realistic performance for an application server, hopefully getting the run times to under an hour or so.
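For example, I was thinking of something along these lines, using -n/-g to bound the file sizes, -y/-q to bound the record sizes, and -i to limit which tests run (the values and the test-file path below are just placeholders I'd need to tune):

iozone -a -n 512M -g 4G -y 4k -q 1M -i 0 -i 1 -i 2 -f /data/iozone.tmp

but I'm not sure what ranges would be representative.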
Any ideas?
Thanks.
For a server with that much RAM, the direct-I/O flag is your friend. That's the
-I
flag: it tells iozone not to cache blocks or files, and to wait for the storage system to confirm that a write is fully committed before moving on. Performance will be understandably worse than if you could use the block cache, but at least your test runs will complete in a reasonable time, and you can still make relative comparisons between each of your storage configs.
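As a rough sketch, a narrowed-down direct-I/O run might look something like this (the size bounds and temp-file path are arbitrary examples, not recommendations):

iozone -a -I -n 512M -g 2G -y 4k -q 512k -i 0 -i 1 -i 2 -f /mnt/test/iozone.tmp

Here -n/-g bound the file sizes, -y/-q bound the record sizes, and the repeated -i options restrict the run to the write/rewrite, read/reread and random read/write tests.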
I'm not clear what your question is, exactly. If it's about iozone itself, then I'm sorry, I don't have anything to offer that hasn't already been said.
Otherwise, if you're also looking for other tools with which to collect your baseline benchmark statistics -- have you considered iometer? On top of rigorous disk workouts, it would also let you capture network I/O performance characteristics, and it runs on several platforms.
Will you also be monitoring the performance of ESX itself? Then you'll be looking at esxtop, which will show you CPU, interrupt, memory, network, disk-interface, per-VM disk, and power-management statistics.
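If you want to capture those counters for later comparison against your iozone runs, esxtop's batch mode will dump them to CSV; something like the following (the 15-second interval and 240 samples are arbitrary):

esxtop -b -d 15 -n 240 > esxtop-run1.csv

The resulting file can then be pulled apart in perfmon or a spreadsheet afterwards.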