Context:
I have a SAN operating system (Nexenta) that runs on ESXi. It has a couple of HBAs passed through to it already, via VT-d. All of the HBAs for the SAN OS are full (no free connectors). I recently purchased some SATA SSDs for Nexenta and attached them directly to the motherboard's on-board SATA controller.
I can add those new SSDs to the Nexenta VM by adding "physical disks" to the VM profile in vSphere. Alternatively, I could connect them to one of the HBAs, but I'd have to disconnect or move existing disks, which entails considerable hassle.
Question:
Assuming that my HBAs don't do any fancy caching and have the same available bus bandwidth and SATA specification as the onboard controller (the one connected to the new SSDs), is there a performance difference between attaching the physical disks to the VM via the vSphere disk-add functionality and attaching them to an HBA that is passed through to the VM via VT-d? Does the vSphere disk-add method impose some relaying/request-forwarding behavior that could degrade disk performance relative to native speeds?
Anecdotes for answers are good, but statistics are better: I know I probably won't notice a performance difference at first, since SSDs are fast. But I've been in the field long enough to know that if there is a problematic performance difference, it will manifest during production-critical activities, at the worst possible time :)
Answer:
Please see the notes about configuring an all-in-one ZFS setup in my post at: Hosting a ZFS server as a virtual guest.
If you're talking about creating an SSD pool or adding drives as raw device mappings (RDMs), disregard the rest of this answer. The preference there would be to run through the HBA rather than RDM. Use SAS expanders if needed. The main reasons are easy portability, configuration simplicity (you don't want this or this) and consistency with what you've already configured. The performance impact is negligible. The management overhead isn't.
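For reference, if you did go the RDM route, the mapping file is created with vmkfstools against the raw device. A minimal sketch; the naa.* ID, datastore and folder names below are placeholders, not values from your setup:

    # Hypothetical example: create a physical-compatibility-mode RDM pointer for one SSD
    vmkfstools -z /vmfs/devices/disks/naa.5000000000000001 \
        /vmfs/volumes/datastore1/nexenta/ssd0-rdm.vmdk
    # The resulting ssd0-rdm.vmdk is then attached to the Nexenta VM as an existing disk.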
If you're instead thinking about using VMDK files on disks presented to ESXi, the general answer is that you can go either way; the performance difference probably won't matter. This assumes the use case is adding ZIL (log) or L2ARC (cache) devices to your virtualized ZFS system. There are instances where I'll add an SSD to a physical system and create VMDKs on top of it to present to the NexentaStor VM. In other cases, I may present an entire raw drive to the instance.
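Either way, once the device (VMDK or raw disk) shows up inside the Nexenta guest, attaching it to the pool is the same. A minimal sketch, assuming a pool named tank and made-up device names as reported by format on your system:

    # Inside the Nexenta guest -- pool and device names are hypothetical
    zpool add tank log c2t1d0     # dedicate one device as the ZIL (SLOG)
    zpool add tank cache c2t2d0   # dedicate another as L2ARC
    zpool status tank             # verify the log and cache vdevs appear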
I base the choice on convenience versus performance. If performance were absolutely paramount, we wouldn't be virtualizing the storage, right? The convenience comes from the flexibility to give an existing system a reasonable performance boost without dedicating an entire 100GB+ SSD to ZIL use (which only needs about 4GB for my purposes). Carving the SSD up leaves room for L2ARC if I'm constrained in some other way; see the sketch below.
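As a sketch of that carving approach (the datastore name, VM folder and sizes are assumptions, not your actual layout), you can cut the SSD-backed datastore into a small ZIL VMDK and a larger L2ARC VMDK from the ESXi shell, then add both to the Nexenta VM:

    # Hypothetical layout: one SSD formatted as the "ssd-datastore" VMFS datastore
    vmkfstools -c 4G  -d eagerzeroedthick /vmfs/volumes/ssd-datastore/nexenta/zil.vmdk
    vmkfstools -c 80G -d thin             /vmfs/volumes/ssd-datastore/nexenta/l2arc.vmdk
    # Attach both VMDKs to the Nexenta VM, then use them as log/cache devices as above.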
So my questions would be: