We're speccing out some 10GbE switches to integrate a few older servers into our EqualLogic SAN, and we're noticing quite a price gap between SFP+ and copper (Cat6a) equipment (Dell 8024F vs. 8024).
I'm not really sure what the real-world difference is between the two form factors. The Dell guys tell me that SFP+ has lower latency, but couldn't tell me much more than that, other than that our M1000e and PS6010XV chassis only come with SFP+ uplinks (and SFP+ is substantially cheaper).
The latency is basically negligible: 10GBASE-T adds on the order of a couple of microseconds per hop. SFP+ has less latency in itself, but the SFP+ spec doesn't include the physical transceiver (which may or may not add latency); hence the need for a physical module (or direct-attach copper cables).
The biggest differences are price, as you've noted, and distance. With SFP+ direct-attach copper, the cables have to be under 15m (10m for certain cables). 10GBASE-T goes the standard 100m. Cat6a cabling is quite cheap (compared to other 10G cabling), and I suspect the equipment manufacturers "make up for that" in the switch price, in addition to 10GBASE-T not being as popular yet.
The 10GBASE-T standard also uses more electricity; the extra power is what buys the longer reach, and the heavier signal processing behind it is what adds the latency. The extra amount used isn't normally a factor.
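As a rough sanity check on "isn't normally a factor", here is a back-of-the-envelope sketch in Python. The per-port wattages and electricity rate are my assumptions (figures in the ballpark people quote for early 10GBASE-T PHYs versus SFP+ direct-attach), not measurements of these switches:

```python
# Back-of-the-envelope: annual power cost difference per port.
# All wattages and the electricity rate are ASSUMED illustrative
# figures, not measurements of the Dell 8024/8024F.

WATTS_10GBASE_T = 4.0   # assumed per-port draw for an early 10GBASE-T PHY
WATTS_SFP_DAC   = 1.0   # assumed per-port draw for SFP+ direct-attach copper
KWH_PRICE       = 0.12  # assumed $/kWh
HOURS_PER_YEAR  = 24 * 365

def annual_cost(watts: float) -> float:
    """Cost in dollars to run one port continuously for a year."""
    return watts / 1000 * HOURS_PER_YEAR * KWH_PRICE

delta = annual_cost(WATTS_10GBASE_T) - annual_cost(WATTS_SFP_DAC)
print(f"Extra cost per 10GBASE-T port: ~${delta:.2f}/year")
# ~$3.15/year per port with these assumptions -- real, but rarely decisive.
```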
"SFP+ has a lower latency" due the much better noise/interference isolation, but copper is generally less fragile/more durable (if it will be in a "higher chance of being re-handled" environment).
The 802.3an (10GBase-T) standard calls for latency of 2.5 microseconds or better, and you are dealing with storage that still has latency measured in milliseconds. The difference might matter for extremely specialized high-performance computing applications, but it can't possibly have any significant impact on your SAN performance.
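To put those orders of magnitude side by side, here is a quick sketch; the 2.5 microsecond figure is the 802.3an worst case above, while the 5 ms storage latency is an assumed round number, not a measurement of your array:

```python
# Compare per-hop 10GBASE-T PHY latency to typical SAN I/O latency.
# The 2.5 us figure is the 802.3an worst case cited above; the 5 ms
# storage latency is an ASSUMED round number for illustration.

PHY_LATENCY_S     = 2.5e-6   # 2.5 microseconds per 10GBASE-T hop
STORAGE_LATENCY_S = 5e-3     # assumed ~5 ms for a disk-backed iSCSI I/O

fraction = PHY_LATENCY_S / STORAGE_LATENCY_S
print(f"One 10GBASE-T hop adds {fraction:.4%} to a "
      f"{STORAGE_LATENCY_S * 1000:.0f} ms I/O")
# -> 0.0500% -- lost in the noise for this workload.
```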
How does the cost difference look after you have priced the actual cables? You may find that market prices for Cat6a patch cables are approaching those of Cat5e, whereas the SFP+ cabling could be a significant component of your project cost. (It may even cancel out the difference in switch prices.)
I would suggest that overall project cost is likely to be the most important deciding factor.
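A minimal sketch of that comparison, assuming placeholder prices throughout (every figure below is made up; substitute your actual quotes):

```python
# Total-project-cost comparison: switch + cabling, per the point above.
# Every price below is an ASSUMED placeholder -- plug in your real quotes.

PORTS = 24

SWITCH_SFP_PLUS  = 8000.0   # hypothetical Dell 8024F-class SFP+ switch
SWITCH_10GBASE_T = 11000.0  # hypothetical Dell 8024-class 10GBASE-T switch
CABLE_TWINAX     = 60.0     # SFP+ direct-attach copper, per cable
CABLE_CAT6A      = 8.0      # Cat6a patch cable, per cable

total_sfp   = SWITCH_SFP_PLUS  + PORTS * CABLE_TWINAX
total_baset = SWITCH_10GBASE_T + PORTS * CABLE_CAT6A
print(f"SFP+ total:      ${total_sfp:,.0f}")
print(f"10GBASE-T total: ${total_baset:,.0f}")
# With these made-up numbers the cable cost claws back ~$1,250 of the
# switch-price gap; with real quotes it may close it entirely.
```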
(Disclaimer: there is no 10GbE at all in my current environment.)
Here are a couple of links discussing latency with 10GBase-T. Basically, 10GBASE-T is slower than 1000Base-T (gigabit) for small packets. If you are doing something like iSCSI it will be insignificant, but if you are doing hundreds of thousands of short key/value lookups between servers that traverse several switches, it can be significant, and surprising that it's slower than gigabit (see the rough sketch after the links).
http://www.datacenterknowledge.com/archives/2012/11/27/data-center-infrastructure-benefits-of-deploying-sfp-fiber-vs-10gbase-t/
http://www.plxtech.com/files/pdf/support/10gbaset/whitepapers/10GBase-T_1000Base-T_Switches.pdf
Note: the latency is from 10GBase-T, not from copper as such. If you use SFP+ with an integrated twinax cable (direct-attach), that is copper but doesn't have the latency problems of 10GBase-T.
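Here is a rough illustration of why the per-hop difference adds up for chatty workloads. The per-hop latencies are in the spirit of the figures in the links above, and the hop and request counts are assumptions for the sake of the example:

```python
# Why per-hop PHY latency can bite for chains of small, serialized requests.
# Per-hop latencies are rough figures in line with the links above; the
# request count and hop count are ASSUMED for illustration.

HOPS           = 3        # assumed switches traversed each way
REQUESTS       = 100_000  # assumed serialized key/value lookups
LAT_10GBASE_T  = 2.5e-6   # seconds per hop (802.3an worst case)
LAT_1000BASE_T = 1.0e-6   # assumed ~1 us per gigabit hop for small frames

def added_seconds(per_hop: float) -> float:
    # Round trip = out and back across every hop, once per request.
    return REQUESTS * 2 * HOPS * per_hop

for name, lat in [("10GBASE-T", LAT_10GBASE_T), ("1000BASE-T", LAT_1000BASE_T)]:
    print(f"{name}: ~{added_seconds(lat):.2f} s of PHY latency "
          f"over {REQUESTS:,} lookups")
# With these assumptions 10GBASE-T adds ~1.5 s vs ~0.6 s for gigabit --
# noticeable for a chatty workload, irrelevant for bulk iSCSI.
```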
Comparing SFP+ to 10GBASE-T (a rough decision sketch follows the list).

Pros of SFP+

- Lower per-hop latency and lower per-port power draw
- Cheaper switches (e.g. the 8024F vs. the 8024)
- The same cage takes direct-attach twinax for short runs or optics for long ones

Cons of SFP+

- Direct-attach copper is limited to roughly 15m (10m for some cables)
- Twinax and optics are expensive compared to Cat6a patch cables
- No backward compatibility with existing 1GbE RJ45 gear

Pros of 10GBASE-T

- Standard 100m reach over Cat6a
- Cheap, field-terminable cabling that reuses familiar structured-cabling practice
- Ports negotiate down to 1000Base-T, easing migration

Cons of 10GBASE-T

- Higher latency (on the order of 2.5 microseconds per hop)
- Higher per-port power consumption
- Switches are currently pricier and less common
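As a hypothetical way of boiling those trade-offs down, here is a small decision sketch. The 15m twinax limit and 100m Cat6a reach come from the answers above; the function itself is just my summary, not vendor guidance:

```python
# A rough decision sketch distilling the thread's trade-offs. The 15 m
# twinax limit and 100 m Cat6a reach come from the answers above; the
# logic is an ASSUMED summary for illustration, not vendor guidance.

def pick_10g_media(run_length_m: float, latency_sensitive: bool,
                   reuse_structured_cabling: bool) -> str:
    if run_length_m <= 15 and latency_sensitive:
        return "SFP+ direct-attach twinax"
    if run_length_m > 100:
        return "SFP+ with fiber optics"
    if reuse_structured_cabling or run_length_m > 15:
        return "10GBASE-T over Cat6a"
    return "SFP+ direct-attach twinax"  # default: cheaper ports, lower power

print(pick_10g_media(5, latency_sensitive=True, reuse_structured_cabling=False))
print(pick_10g_media(40, latency_sensitive=False, reuse_structured_cabling=True))
```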
Until/unless they can get the price and power for 10GBASE-T down, it IMO has fairly limited utility.