We have an iSCSI SAN unit connected to a cluster of ESX servers. The servers are all managed by an instance of vCenter. The vCenter instance manages a dozen Windows Server VMs.
Most of the VMs have more than one volume, and those volumes appear in the VM settings in vCenter: drive C in Windows appears as Hard disk 1 in vCenter, drive D appears as Hard disk 2, and so on. In other words, the SAN is abstracted away from the guest servers.
One server, however, is configured differently. Its C drive is handled by vCenter, but its second volume is directly connected to the SAN via Windows iSCSI Initiator. When I asked the server admin why he had configured it that way, he asked, "Why would you want a middleman handling your volumes?" I tried to explain that vCenter's HA and Snapshot features won't cover the second volume, but he remains unconvinced.
I remain unconvinced as well. It seems to me that all of a VM's volumes should be handled by vCenter, but I could be wrong. Have you configured your VMs in a similar manner, where the boot disk is presented to the VM by vCenter but all other volumes are connected directly to the SAN?
Yes.
Sometimes people do this. Whatever the reasons, vSphere won't be able to snapshot or really do much of anything with these guest-attached volumes. I'm not an advocate of this approach unless it's absolutely necessary: it causes confusion, complicates networking design and DR, and is rarely documented well.
I worked in an environment where a particular client did this on every one of their 900 virtual machines: a dreadful mix of CIFS, iSCSI, and NFS volumes presented from multiple SAN arrays directly to the VMs instead of as VMDKs.
We currently have a file server configured in a similar manner: the boot drive and several others are presented through VMware, and one data drive is a direct iSCSI connection. It was originally configured that way to get around the 2 TB virtual disk (VMDK) size limit in older versions of vSphere. And yes, vSphere's snapshot and HA features will not apply to that volume, since it is presented directly to the guest. This has worked fine for us because that drive is covered by snapshots on the SAN itself.
As for the admin's "middleman" comment: there is no meaningful performance penalty, and no reason to give up the snapshot and HA capabilities. Published benchmarks show that the hypervisor's overhead for storage is minimal. With centralized storage, your bottleneck won't be the hypervisor but the medium connecting the hosts to the storage (Fibre Channel, Ethernet, etc.), and beyond that the drives and controllers themselves. The only time I could see the hypervisor becoming a bottleneck is if the host is oversubscribed, with more resources allocated to VMs than it physically has and every VM demanding them at once.

If you're absolutely convinced that the hypervisor would cause a performance issue, you can use a raw device mapping (RDM) instead, which passes the LUN through to the guest while still letting vSphere track the disk. There have been multiple case studies on VMware storage performance if you take the time to search for them.
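To put the "middleman" concern in perspective, here's a back-of-envelope sketch of where an iSCSI I/O actually spends its time. All of the numbers are illustrative assumptions, not measurements from any particular environment, but they show why a small fixed per-I/O cost in the hypervisor is dwarfed by the network and the array:

```python
# Rough latency budget for a single iSCSI read.
# Every figure below is an assumed, illustrative value.

GBE_LINK_MBPS = 125.0          # 1 GbE payload ceiling is roughly 125 MB/s
HYPERVISOR_OVERHEAD_US = 20.0  # assumed per-I/O cost added by the hypervisor storage stack
NETWORK_RTT_US = 500.0         # assumed switching + TCP/iSCSI round trip
ARRAY_SERVICE_US = 5000.0      # assumed SAN controller + spindle service time (~5 ms)

total_us = HYPERVISOR_OVERHEAD_US + NETWORK_RTT_US + ARRAY_SERVICE_US
overhead_pct = 100.0 * HYPERVISOR_OVERHEAD_US / total_us

print(f"total latency per I/O:   {total_us / 1000:.2f} ms")
print(f"hypervisor's share:      {overhead_pct:.2f}%")
print(f"link throughput ceiling: {GBE_LINK_MBPS:.0f} MB/s (the real cap)")
```

Under these assumptions the hypervisor accounts for well under one percent of the I/O path; the wire and the array dominate, and on 1 GbE the link itself caps sequential throughput long before the hypervisor does. Swap in your own measured numbers, but the proportions rarely change much.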
Personally I've had no issues using VMware to manage storage, especially since vSphere 5.5 raised the virtual disk size limit to 62 TB. Hope this helps.