I'm a bit embarrassed that I've never seen or used the --iops parameter in the esxcli round-robin deviceconfig options.
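For anyone else who hasn't run into it, this is the per-device round-robin setting you can read back with something like the following (the naa ID is just a placeholder for one of our V3700 LUNs):

    # Show the current round-robin policy for a single device (placeholder device ID)
    esxcli storage nmp psp roundrobin deviceconfig get --device=naa.xxxxxxxxxxxxxxxx
    # The "IOOperation Limit" line in the output shows the current value (1000 by default)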
I stumbled upon an article about it, benchmarked before and after with various block sizes and read/write mixes, and I'm seeing huge performance gains on our soon-to-be-deployed V3700 SAN, regardless of disk configuration.
Monitoring throughput of the 8 iSCSI interfaces (it's an active/active design) with the free SolarWinds SNMP bandwidth monitor on the stacked switches, I saw each interface jump from ~120 Mbps (about 12% utilization) to ~325 Mbps (about 33% utilization). Some interfaces were even pegged, which made me suspect packet loss on those ports (I didn't get a chance to check the port statistics before the IOMeter job completed).
So what's the downside here, besides oversaturating a particular path (link)? What's a safe and happy setting people are using? Seems too good to be true.
Most of the iSCSI MPIO configurations I run with VMware end up using the SAN vendor's recommended path-selection policies.
For round-robin, most of the storage systems I've used recommend lowering the number of I/O operations before switching paths from the default of 1000 down to 1.
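On ESXi 5.x that change is made per device through the round-robin PSP options; a minimal sketch, with the naa ID as a placeholder for one of your LUNs:

    # Make sure the device is using round-robin, then drop the path-switch threshold to 1 I/O
    esxcli storage nmp device set --device=naa.xxxxxxxxxxxxxxxx --psp=VMW_PSP_RR
    esxcli storage nmp psp roundrobin deviceconfig set --device=naa.xxxxxxxxxxxxxxxx --type=iops --iops=1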
Edit: I had a V3700 at a recent job. The IBM best practices (see page 12) recommended fixed paths on ESXi 5.1 and round-robin on 5.5. For tuning, we lowered the round-robin limit to 1 I/O operation.
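If you have a lot of LUNs, the same setting can be looped over every device on the host; a rough sketch (the grep pattern is an assumption and should be narrowed to just the array's devices, e.g. by NAA prefix, before running it):

    # Apply iops=1 to every naa.* device currently claimed by the host
    for dev in $(esxcli storage nmp device list | grep -E '^naa\.'); do
        esxcli storage nmp psp roundrobin deviceconfig set --device=$dev --type=iops --iops=1
    done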