I am getting poor write performance on an NFS share hosted on OpenIndiana 151a. The pool is a mirror of two 1 TB Seagate Constellation drives plus two SSDs, one used as a read cache (L2ARC) and one as a write cache (ZIL/SLOG). My thinking was that by using a write cache I would get SSD-like performance even though the underlying disks are fairly slow. Instead I am averaging about 40 MB/s on writes. I should note that both SSDs are SATA3 and capable of 500 MB/s. I feel cheated! My setup is as follows:
- ESXi 5, NFS datastore, MTU set to 9000 on VMkernel and vSwitch
- VM1: OpenIndiana, hosting the datastore; it resides on direct-attached storage.
- VM2: Windows XP with two 10 GB virtual disks (one system drive, one test drive benchmarked with HD Tune Pro); it resides on the NFS datastore.
I am currently running a range of IOMeter tests and will post the results when they are done.
I am not seeing anything close to the advertised SATA3 speeds here. Would I be better off just using the SSDs as direct-attached storage? In other words, is NFS the problem here?
This depends entirely on what and how you're testing. Is that 40 MB/s read or write speed, or combined? If your benchmarking software is doing purely sequential reads or writes, chances are you won't see much benefit from the SSD cache devices.
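If you want to see whether the cache and log SSDs are actually being touched while the benchmark runs, watching per-vdev activity on the storage side is a quick sanity check. A rough sketch, assuming a pool named tank (substitute your own pool name):

    # Show per-vdev bandwidth/IOPS every 5 seconds while the test runs;
    # the "logs" and "cache" sections reveal whether the SSDs see any traffic.
    zpool iostat -v tank 5

If the log device sits idle during a sequential write test, the SSDs simply aren't in the data path for that workload.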
Your virtual switch setup should look something like this (I'm using NexentaStor instead of OpenIndiana, but the same principles apply): the storage server gets a private vSwitch with a VMkernel port, which is what serves NFS to the ESXi host. You then mount that share as a datastore on the ESXi system and place your VMs on it. No physical adapter is needed on that vSwitch, and with the VMXNET3 network adapter the link shows up as 10GbE.
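Since you already set MTU 9000, it's also worth verifying that jumbo frames actually work end to end; a mismatch anywhere on the path quietly costs throughput. A rough check from the ESXi shell, where 192.168.10.2 is a placeholder for the OpenIndiana VM's address:

    # Confirm the MTU that actually took effect on the vSwitch and VMkernel port
    esxcli network vswitch standard list
    esxcli network ip interface list

    # Ping the storage VM with a jumbo-sized payload (8972 bytes + headers = 9000).
    # Add -d (don't fragment) if your build supports it, so fragmentation
    # can't hide an MTU mismatch.
    vmkping -s 8972 192.168.10.2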
As far as I know, the ZIL isn't a "write cache" so much as a fast intent log (journal), so for sustained writes you'd still be getting the raw disks' speed at best. That said, 40 MB/s is below even that, which is where "how did you run those tests" comes into play.
Correction: I've been told I'm wrong and that the ZIL does double as a write cache of sorts, but I'm not sure how accurate that is.
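For reference, this is roughly how a dedicated log (SLOG) and a cache (L2ARC) device get attached and checked on the OpenIndiana side; tank and the c4t*d0 device names are placeholders for your pool and SSDs:

    # One SSD as the separate intent log, one as the read cache
    zpool add tank log c4t1d0
    zpool add tank cache c4t2d0

    # The devices should now appear under "logs" and "cache"
    zpool status tank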
vSphere mounts all NFS shares with sync semantics, meaning every write is issued synchronously. If you instead present the LUNs over iSCSI, only the writes the guest explicitly marks as synchronous are handled synchronously; the rest are asynchronous.
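One way to confirm that synchronous write handling is the bottleneck is to temporarily tell ZFS to treat all writes on the NFS-backed dataset as asynchronous. This is a diagnostic only, not a recommendation: it bypasses the ZIL and risks losing in-flight writes on power loss. Here tank/nfs is a placeholder for whatever dataset backs your share:

    # See how synchronous writes are currently handled
    zfs get sync tank/nfs

    # Test only: acknowledge writes before they reach stable storage
    zfs set sync=disabled tank/nfs

    # Put it back to the default afterwards
    zfs set sync=standard tank/nfs

If write throughput jumps with sync=disabled, the synchronous path (and thus the SLOG) is what's limiting you.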
To answer the question about the SSD as ZIL: for a continuous stream of (synchronous) written data, throughput will be limited by the write speed the disks can sustain for that I/O pattern. The ZIL is really more useful for achieving high random IOPS than for raw sequential synchronous writes.
In most cases, sequential throughput matters less than how many truly random reads and writes you can handle, especially in virtualized environments where many different "clients" are likely hitting the storage at once. If you need high throughput, consider allowing asynchronous writes by using iSCSI instead of NFS.
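If you decide to try the iSCSI route, the rough shape of it on OpenIndiana with COMSTAR is sketched below; tank, the 100G size, and the LUN name are placeholders, and the storage-server/iSCSI target packages must already be installed:

    # Create a zvol to export as a LUN
    zfs create -V 100G tank/esxi-lun

    # Enable the COMSTAR framework and the iSCSI target service
    svcadm enable stmf
    svcadm enable -r svc:/network/iscsi/target:default

    # Register the zvol as a logical unit and expose it to all initiators
    sbdadm create-lu /dev/zvol/rdsk/tank/esxi-lun
    stmfadm add-view <GUID-printed-by-sbdadm>

    # Create a target for the ESXi software iSCSI initiator to discover
    itadm create-target

From the ESXi side you would then add the OpenIndiana VM's address as a dynamic discovery target on the software iSCSI adapter and format the LUN as VMFS.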