We are looking to implement 10-gigabit Ethernet for our ESX hosts and SANs. We have been looking at a number of solutions, and it has really boiled down to what will give us the best bandwidth for the money.
One solution was to run Intel X540-T2 cards with CAT6a to a Netgear XS712T switch.
However, the more I read into it, the more favourable SFP+ becomes. With second-hand switches like the Dell PowerConnect 8024F (which supports stacking) available at a reasonable price point (under £2K), and little price difference between 10GBASE-T and 10GBASE-DA NICs, it seems to make sense.
Does anyone have direct experience with SFP+ versus CAT6a/CAT7 cabling in a VMware ESX environment and can offer advice?
We have a lot of 10GE in our datacenters and we never use CAT6a for it.
I would strongly recommend using SFP+. For switch-to-server connections you can use 10GE DAC cables (copper cables with fixed SFP+ modules on both ends), which are much cheaper than optical transceivers and fibre. We use these without problems to connect ESX hosts:
http://www.flexoptix.net/en/sfp-plus-copper-cable-10-gigabit-dac-1-meter.html
If you need longer runs in the future you can simply switch from DAC cables to optical SFP+ transceivers and fibre. No need to replace the switch.
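If it helps: once a card is installed, you can confirm the negotiated link speed from the ESXi shell (assuming a recent ESXi release where esxcli is available):

    ~ # esxcli network nic list

This lists each vmnic with its driver, MAC address and link state; a healthy SFP+ DAC or 10GBASE-T link should show 10000 in the Speed column. Worth checking before you go live, since a bad DAC or an unsupported transceiver will often negotiate down or show the link as down rather than fail outright.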