Wouldn't you want your critical hosts served by the most reliable, highest-quality connectivity you can give them?
Gig Switch<->Cat5E<->Server?
or
Gig Switch<->Cat6<->Server?
My understanding is that Cat6 delivers improved signaling between two NICs. I've heard people suggest Cat5e for short distances and Cat6 for runs longer than 20 feet. Is that the right criterion for choosing Cat6 over Cat5e?
I'd like to clear up whether I actually need Cat6 for improved performance.
There is currently no reason to use Cat6 cables when connecting to hosts; Cat5e is all that gigabit connectivity requires. In fact, if you upgrade to 10GBASE-T in the future, you may end up replacing even Cat6 with Cat6a anyway.
I should add that the Cat6 that I have worked with in the past was much more difficult to route than Cat5e. I'm not sure if the thicker insulation is a requirement for the spec, but it was not fun to work with.
Proper termination is much more critical with Cat6 (and Cat6a) than with Cat5e, so it demands a greater skill level from the installers. If you are simply using patch cables, then the distances are probably short, and for 1Gb speeds Cat5e will work fine for short runs and most long runs.
If you are running long cables through areas of high RF/EMI, consider Cat6; otherwise Cat5e will work fine.
If you have plans to upgrade to 10Gb, I would consider a fiber solution instead of copper in any case.
Some cable vendors may not want you to know that 1000BASE-T was designed to run 100 meters on plain Cat5, not even Cat5e. See the Panduit white paper on Cisco's site.
Another under-publicized fact: you may not want 10GBASE-T (10 gigabit Ethernet over twisted pair, even Cat6a) unless you can live with maximum round-trip delay specs that are fifty (50) times longer than those for 10 gig over InfiniBand-style copper cabling, 10GBASE-CX4. See IEEE 802.3-2008, Section Four, clauses 44.3 and 55.11.
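To put those delay budgets in perspective, here's a minimal Python sketch that converts PHY delay limits expressed in "bit times" (BT) into wall-clock time at 10 Gb/s. The BT figures below are assumptions I've picked to match the 50x ratio cited above, not quotes from the standard; check clauses 44.3 and 55.11 for the authoritative numbers.

```python
# Convert PHY delay budgets from bit times (BT) to wall-clock time at 10 Gb/s.
# The BT values are illustrative assumptions consistent with the 50x ratio
# mentioned above; consult IEEE 802.3-2008 44.3/55.11 for the real figures.

BIT_TIME_NS = 0.1  # one bit time at 10 Gb/s = 1 / (10 * 10^9) s = 0.1 ns

delay_budgets_bt = {
    "10GBASE-T (Clause 55)": 25_600,   # assumed max PHY delay, in bit times
    "10GBASE-CX4 (Clause 54)": 512,    # assumed max PHY delay, in bit times
}

for phy, bt in delay_budgets_bt.items():
    print(f"{phy}: {bt:>6} BT = {bt * BIT_TIME_NS / 1000:.2f} us")

# Prints roughly:
#   10GBASE-T (Clause 55):  25600 BT = 2.56 us
#   10GBASE-CX4 (Clause 54):    512 BT = 0.05 us
```

The point isn't the exact numbers; it's that the heavy block coding 10GBASE-T uses buys reach over twisted pair at the cost of microseconds of PHY latency, which matters for latency-sensitive workloads.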
Within server racks, the 15 meter max length of shielded CX4 cables should be about right, though they're not inexpensive. For longer distances at 10 gigs, fiber is the way to go.
If you seriously can't get the budget for Cat6 (or Cat6a, while we're at it) instead of Cat5e to all your servers, then I guess some length criterion like that is as reasonable as anything. But it really seems nonsensical to me to connect crucial servers with anything less than the best cabling you can lay your hands on.
It really doesn't matter a whole heck of a lot. I know you might see a bit of signal loss over Gigabit speeds on Cat5e, but would any of your applications or users really know the difference?
If the price difference isn't that much, then by all means pay for Cat6. Otherwise, stick with Cat5e.
Maybe I'm getting it super cheap compared to some of you, but for me Cat6 is only about 20% more than Cat5e, so I go Cat6 every time.
Well, here's what I would do:
Are there already Cat5e cables running from all the servers to switches / patch panels?
If yes:
Are you having any physical-layer connectivity issues (e.g., frame errors, "flapping" ports, throughput problems)?
If yes:
First, check your connections at the ports and verify you don't have any higher-layer issues. If you narrow it down to the cable itself, then replace the Cat5e with Cat6. (A quick way to check a server's error counters is sketched after this list.)
If no:
It's working without problems. Leave it alone.
If no (nothing is cabled up yet):
Go ahead and run Cat6 (or Cat6a). It's more of a pain to work with, but you'll get some practice and get better at it, and you'll have the best cable you can buy installed for future-proofing. It's not really that much more expensive.
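For the "physical-layer issues" check above, here's a minimal sketch of what I mean, assuming a Linux server and a hypothetical interface name (eth0); it just reads the kernel's per-interface error counters from sysfs and flags anything non-zero:

```python
#!/usr/bin/env python3
# Read Linux per-interface error counters from sysfs and flag non-zero ones.
# IFACE is a placeholder; substitute your server's actual NIC name.

from pathlib import Path

IFACE = "eth0"  # assumption: adjust to match your NIC
STATS = Path(f"/sys/class/net/{IFACE}/statistics")

for counter in ("rx_errors", "rx_crc_errors", "rx_frame_errors", "tx_errors"):
    value = int((STATS / counter).read_text())
    note = "  <-- suspect cabling/termination" if value else ""
    print(f"{counter}: {value}{note}")
```

If those counters climb while the link is under load, look at the cable and its terminations before blaming anything at a higher layer.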
In our data center, I've got a mix of Cat6 and Cat5e. The Cat6 went to the newer devices I've added since the rack was originally installed. The older stuff is still on Cat5e because it's still working just fine, and I've got this thing about fixing things that aren't broken: I don't do it.
Just my 2 cents worth ...