I've read the VMDirectPath materials, and they say it helps performance in the 10GbE card case. Sure, if multiple VMs directly access the physical 10GbE card, the VMM bottleneck is avoided with VMDirectPath.
Here I have two questions:
If I have three 1GbE cards and want to get the VMDirectPath benefit, should I first bond the three cards together? Or should I somehow bind a VM to a physical NIC with the help of VMDirectPath? Is that possible?
If I configure my disks in passthrough mode in ESXi, will performance get a boost?
With VMDirectPath, don't even think about trying to get multiple VMs to talk to the same physical hardware - it'll either not work at all or not work stably.
Now onto your questions. With only three 1Gbps NICs you'll see only minor performance gains from VMDirectPath, and really only in terms of latency - hardly any additional bandwidth. The regular vSwitch/port group method can saturate a single 10Gbps NIC, is easier to set up and manage, and doesn't force compromises like VMDirectPath does, such as losing the ability to vMotion. If you want the benefit of the second and third NICs, simply add them to the vSwitch, cable them to your switch/es correctly and set the pathing policy - it's far easier than passing them through and teaming them in-VM.
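If you'd rather script that than click through the vSphere Client, here's a rough sketch of what adding the extra NICs as uplinks looks like with pyVmomi (the Python SDK for the vSphere API). The host name, credentials, vSwitch and vmnic names are all placeholders, so treat it as a starting point rather than something to paste in as-is:

```python
# Rough sketch with pyVmomi (pip install pyvmomi); host name, credentials and
# vmnic/vSwitch names below are placeholders for your own environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()              # lab only; validate certs in production
si = SmartConnect(host="esxi.example.com", user="root", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Grab the (first) host and its networking manager.
host_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
net_sys = host_view.view[0].configManager.networkSystem

# Make all three 1GbE NICs uplinks of the existing vSwitch; the teaming/failover
# policy can then be adjusted under spec.policy.nicTeaming.
vswitch = next(v for v in net_sys.networkInfo.vswitch if v.name == "vSwitch0")
spec = vswitch.spec
spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0", "vmnic1", "vmnic2"])
net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)

Disconnect(si)
```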
Yes, you can pass your disks through if you really want, either as physical-mode RDMs or via VMDirectPath - with the latter you lose the whole controller to a single VM, of course. And yes, there would be a performance gain - I'd suggest somewhere between 5% and 25% overall depending on the system. Again, I wouldn't bother though: you're moving to virtualisation, yet by using these techniques you're losing much of the benefit of doing so. I believe it's counter-productive.
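For completeness, here's a hedged pyVmomi sketch of what attaching a LUN to a VM as a physical-mode RDM looks like programmatically. The VM name and the naa. identifier are made-up placeholders, and details like the unit number and disk mode will need adjusting for your setup:

```python
# Sketch only: map a local LUN into an existing VM as a physical-mode RDM.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()              # lab only; validate certs in production
si = SmartConnect(host="esxi.example.com", user="root", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

# Find the VM and the host it runs on (names/identifiers below are made up).
vm_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in vm_view.view if v.name == "db-vm")
host = vm.runtime.host

# Look up the LUN to map by its canonical name.
lun = next(l for l in host.configManager.storageSystem.storageDeviceInfo.scsiLun
           if l.canonicalName == "naa.600508b1001c000000000000000000aa")

# Reuse the VM's existing SCSI controller.
controller = next(d for d in vm.config.hardware.device
                  if isinstance(d, vim.vm.device.VirtualSCSIController))

backing = vim.vm.device.VirtualDisk.RawDiskMappingVer1BackingInfo(
    fileName="",                                    # mapping file goes on the VM's datastore
    deviceName=lun.deviceName,
    lunUuid=lun.uuid,
    compatibilityMode="physicalMode",               # SCSI commands passed straight to the LUN
    diskMode="independent_persistent",
)
disk = vim.vm.device.VirtualDisk(
    backing=backing,
    controllerKey=controller.key,
    unitNumber=1,                                   # pick a unit number that's free on that controller
    capacityInKB=lun.capacity.block * lun.capacity.blockSize // 1024,
)
change = vim.vm.device.VirtualDeviceSpec(
    operation=vim.vm.device.VirtualDeviceSpec.Operation.add,
    fileOperation=vim.vm.device.VirtualDeviceSpec.FileOperation.create,
    device=disk,
)
vm.ReconfigVM_Task(spec=vim.vm.ConfigSpec(deviceChange=[change]))

Disconnect(si)
```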
Assuming you're talking about VMDirectPath, I think you're looking at the wrong solution for your problem.
When you use VMDirectPath you lose the following features:
- vMotion
- Hot adding and removing of virtual devices
- Suspend and resume
- Record and replay
- Fault Tolerance
- High Availability
- DRS (the VM can be in a cluster, but it can't be migrated)
- Snapshots
I think you're much better off binding the three 1Gb Ethernet adapters together in a standard link aggregation mode. If you implement NIC teaming in load balancing mode using "Route based on IP hash" as the setting, then for a minor increase in CPU utilisation you get an effective 3Gb network adapter that handles the failure of one of the Ethernet cards, and you are still able to use vMotion and all the other nice things in vSphere.
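To illustrate, here's a hedged pyVmomi sketch of setting that teaming mode programmatically - host name, credentials, vSwitch and NIC names are placeholders. Bear in mind that "Route based on IP hash" also expects a matching static EtherChannel/port-channel on the physical switch side, so configure your switch accordingly:

```python
# Sketch: team the three 1Gb NICs on a vSwitch and use IP-hash load balancing.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()              # lab only; validate certs in production
si = SmartConnect(host="esxi.example.com", user="root", pwd="secret", sslContext=ctx)
content = si.RetrieveContent()

host_view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
net_sys = host_view.view[0].configManager.networkSystem

vswitch = next(v for v in net_sys.networkInfo.vswitch if v.name == "vSwitch0")
spec = vswitch.spec
spec.bridge = vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic0", "vmnic1", "vmnic2"])
if spec.policy.nicTeaming is None:
    spec.policy.nicTeaming = vim.host.NetworkPolicy.NicTeamingPolicy()
spec.policy.nicTeaming.policy = "loadbalance_ip"    # "Route based on IP hash" in the client UI
net_sys.UpdateVirtualSwitch(vswitchName="vSwitch0", spec=spec)

Disconnect(si)
```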
There's lots more information on network configuration in this VMware PDF.
Are you actually seeing a performance issue with your disks? It sounds like you're essentially willing to lose all the benefits of virtualisation for a few % increase in performance, which kind of makes me think you'd be better off sticking to dedicated hardware.