I guess this is in some ways a "how long is a piece of string" question, but even if there is a "this fits most situations" answer, I have no idea what it is, so...
I have a SAN on evaluation, an HP P4000. I'd like to use IOMeter to do some benchmarking to see what it's capable of.
However, I have no idea what combination of block size, read/write split, and random/sequential split is applicable to different usages.
For example, how would you simulate some Exchange activity, some SQL activity, some general VM activity, and so on?
I know how to add workers and set them loose with different settings, but what settings should I use?
Thanks.
Storage System Performance Analysis with Iometer: http://communities.vmware.com/docs/DOC-3961
From a SQL Server perspective
On a SQL Server box you would preferably test the disks with the following parameters, depending on where you will be storing the MDF, NDF, LDF and TEMPDB files:
- All disks (MDF, NDF, LDF, TEMPDB)
- Serially written disks (LDF, TEMPDB)
- Serially read disks (MDF, NDF):
  - 64 KiB Extent Read
  - 128 KiB Read-Ahead
  - 256 KiB Read-Ahead
  - 512 KiB Read-Ahead
  - 1024 KiB Read-Ahead (Enterprise Edition)
You can vary the percentage of random reads to see if your results vary, but I found the above values to be a pretty good starting point.
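If you want a quick sanity check alongside IOMeter, a few lines of Python can time the sequential read sizes listed above. This is a minimal sketch of mine, not a substitute for IOMeter's worker and outstanding-I/O controls; the file path is a placeholder, and you'll want a test file much larger than RAM so the OS page cache doesn't flatter the numbers.

```python
import time

# Placeholder path -- create a test file much larger than RAM first,
# otherwise the OS page cache will inflate the results.
TEST_FILE = r"D:\iotest\testfile.dat"
BLOCK_SIZES = [64, 128, 256, 512, 1024]  # KiB, matching the read-ahead specs above
DURATION = 10  # seconds per block size

for kib in BLOCK_SIZES:
    bs = kib * 1024
    read_bytes = 0
    with open(TEST_FILE, "rb", buffering=0) as f:  # unbuffered at the Python level
        deadline = time.monotonic() + DURATION
        while time.monotonic() < deadline:
            chunk = f.read(bs)   # sequential read
            if not chunk:
                f.seek(0)        # wrap around at EOF and keep going
                continue
            read_bytes += len(chunk)
    mib_s = read_bytes / DURATION / (1024 * 1024)
    print(f"{kib:>5} KiB sequential read: {mib_s:8.1f} MiB/s")
```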
- SQL Server Best Practices Article (MSDN)
- Troubleshooting Slow Disk I/O in SQL Server (MSDN Blogs)
- SQL Server Perfmon (Performance Monitor) Best Practices (Brent Ozar)
- How to examine IO subsystem latencies from within SQL Server (SQLSkills)
Exchange and SQL activity tends toward the frequent, smaller-I/O end of the scale. Exchange also has quite a few larger I/O bursts as attachments are written and pulled. Backup intervals and long-running queries can really play hob as well, and are probably your peak I/O instances. Exchange Online Defrag is our I/O peak for Exchange, and SQL backups are our I/O peak for our SQL Server.
Exchange Online Defrag involves a lot of I/O operations but not much throughput, so the average transfer size is small, 512 B small, and there are a lot of them. The read/write ratio varies tremendously, but for a well-maintained Exchange DB it should be mostly reads. Access will be significantly random, but with enough sequential activity to keep it interesting (no, I don't have exact ratios).
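To get a feel for what that pattern costs, here's a rough stand-in: tiny 512 B reads at random, sector-aligned offsets, counted as IOPS. The path and duration are placeholders of mine, and real Exchange I/O is messier than this.

```python
import os
import random
import time

# Rough stand-in for the online-defrag pattern described above:
# lots of tiny (512 B) reads at mostly-random offsets.
TEST_FILE = r"D:\iotest\testfile.dat"  # placeholder path
BLOCK = 512
DURATION = 10  # seconds

size = os.path.getsize(TEST_FILE)
ops = 0
with open(TEST_FILE, "rb", buffering=0) as f:
    deadline = time.monotonic() + DURATION
    while time.monotonic() < deadline:
        # align each read to a 512 B boundary, like a sector-sized I/O
        offset = random.randrange(0, size // BLOCK) * BLOCK
        f.seek(offset)
        f.read(BLOCK)
        ops += 1
print(f"512 B random reads: {ops / DURATION:,.0f} IOPS")
```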
SQL backups involve a variety of transfer sizes, but unlike online defrag the throughput is actually high as well. Plan for a mix of 512 B to 4 KiB transfer sizes. The read/write ratio depends on where the data is ending up! Writes can be very high speed and (depending on the backup script) almost entirely sequential, while the reads are going to be 100% random.
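Along the same lines, here's a hedged sketch of that backup-style mix: random reads of 512 B to 4 KiB from a source file, streamed out as sequential writes to a target on a different disk. Both paths and the duration are placeholders; a real backup job adds buffering and verification on top of this.

```python
import os
import random
import time

# Crude imitation of the backup pattern above: random reads of
# 512 B-4 KiB from a source file, written sequentially to a
# destination on another disk. Both paths are placeholders.
SRC = r"D:\iotest\source.dat"    # stands in for the database volume
DST = r"E:\iotest\backup.dat"    # backup target on a different spindle/LUN
SIZES = [512, 1024, 2048, 4096]  # the 512 B to 4 KiB spread
DURATION = 10  # seconds

size = os.path.getsize(SRC)
moved = 0
with open(SRC, "rb", buffering=0) as src, open(DST, "wb", buffering=0) as dst:
    deadline = time.monotonic() + DURATION
    while time.monotonic() < deadline:
        bs = random.choice(SIZES)
        src.seek(random.randrange(0, size - bs))  # random read...
        dst.write(src.read(bs))                   # ...sequential write
        moved += bs
    os.fsync(dst.fileno())  # make sure the writes actually hit the disk
print(f"Backup-style copy: {moved / DURATION / (1024 * 1024):.1f} MiB/s")
```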
General VM activity depends on what's in the VMs. If you've got Exchange or SQL in there, then monitor for that. If by general you mean general file serving, such as web or CIFS sharing, well, that depends on what the users are doing; CAD engineers have very different access patterns than an office full of purchasing clerks. There is no generic I/O pattern for "general VM activity"; you have to plan for what you actually have in your VMs.