I have a question about a non-optimal setup and its practical implications. Ideally you would place the ESXi server right in the same room as the FreeNAS white box.
My situation is this: I have a run of ~125 ft of Cat 5e connecting an ESXi server to a FreeNAS white box in the server room. I know that distance is within the maximum allowed for Ethernet, but I have two questions:
- Can Cat 5e support gigabit speeds at that distance if the switch on the back end is a Linksys SRW-2048?
- Should I be concerned about the distance causing data read and write timeouts at the SCSI level (the disk operations of ESXi)?
A signal in copper twisted pair propagates at roughly two-thirds the speed of light, so it takes your data on the order of 200 ns to transit your ~38 m Ethernet run. At 1000 megabits per second, or one nanosecond per bit, that is about 200 bits' worth of time, less than it takes to serialize half of a minimum-size (512-bit) Ethernet frame, let alone the switching latency. Propagation delay is therefore negligible.
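If you want to sanity-check those numbers, here is a quick back-of-the-envelope calculation (Python, just for the arithmetic); the 0.64–0.70 velocity factor is the typical published range for Cat5e, not a measurement of your particular cable.

```python
# Back-of-the-envelope check of the propagation and serialization numbers above.
C = 299_792_458            # speed of light in vacuum, m/s
run_m = 125 * 0.3048       # 125 ft in metres (~38.1 m)
link_bps = 1_000_000_000   # 1000BASE-T line rate

for nvp in (0.64, 0.70):   # assumed Cat5e velocity of propagation range
    t_prop = run_m / (nvp * C)          # one-way propagation delay, seconds
    bits_in_flight = t_prop * link_bps  # bits "on the wire" at any instant
    print(f"NVP {nvp}: {t_prop * 1e9:.0f} ns one way, ~{bits_in_flight:.0f} bits in flight")

# Output is on the order of 180-200 ns and ~200 bits: a fraction of one
# minimum-size (512-bit) Ethernet frame, so propagation delay is negligible.
```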
The maximum run length supported by healthy Cat5e cabling for 1000BASE-T is 100 m, so at roughly 38 m you are comfortably within that.
iSCSI timeouts, and the SCSI timeouts within your guests, are typically many orders of magnitude longer than the latency of a switched Ethernet link. I think the relevant ESXi default is on the order of 10 seconds, for example, while I'd expect latency in your case to be under 100 microseconds. Performance would clearly be abysmal if the link latency were a few hundred milliseconds, but it wouldn't actually fail.
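To make the orders-of-magnitude point concrete, here is a small sketch that reads a Linux guest's SCSI command timeout from sysfs and compares it with the ~100 µs latency estimate; the device name is a placeholder and the path assumes a Linux guest whose disk is driven by the standard sd driver.

```python
# Assumption: a Linux guest exposing the SCSI command timeout (in seconds) at
# /sys/block/<device>/device/timeout; "sda" is a placeholder device name.
from pathlib import Path

device = "sda"                     # change to match your guest's disk
timeout_s = int(Path(f"/sys/block/{device}/device/timeout").read_text())
expected_link_latency_s = 100e-6   # the ~100 microsecond estimate above

print(f"Guest SCSI command timeout on {device}: {timeout_s} s")
print(f"Headroom over the link latency: ~{timeout_s / expected_link_latency_s:,.0f}x")
```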
What would concern me a lot more than the length of the link is whether the switch in question is really suitable for iSCSI traffic, in the sense of providing consistent performance under stress. If you are just connecting a small number of servers to a single storage system over a handful of GigE ports it might be fine, but it would be a good idea to monitor latency over time to make sure it keeps performing as required. At a minimum, make sure hardware flow control is enabled correctly, use VLANs to segregate the iSCSI traffic, and, from what I can see of that switch, avoid relying on its QoS features.
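If you do monitor latency over time, even something as crude as timing a TCP connect from a machine on the ESXi storage network to the iSCSI target gives a useful trend line. The sketch below assumes a reachable target address (placeholder shown) and the standard iSCSI port 3260; swap in your FreeNAS box's IP and run it periodically.

```python
# Minimal latency probe: measures TCP connect time to the iSCSI target, a crude
# but useful proxy for link + switch latency trends under load.
import socket, time

TARGET = ("192.0.2.10", 3260)   # placeholder storage address, default iSCSI port
SAMPLES = 10

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection(TARGET, timeout=2):
        pass                     # connection established; close immediately
    rtts.append((time.perf_counter() - start) * 1000)  # milliseconds
    time.sleep(0.5)

print(f"connect latency ms: min={min(rtts):.2f} "
      f"avg={sum(rtts) / len(rtts):.2f} max={max(rtts):.2f}")
```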
As a complement to the other answers: VMware ESX/ESXi is very touchy about response times. If you have frequent timeout errors, it's probably because your disk subsystem isn't fast enough (VMware generates lots of random I/O).
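One rough way to get a feel for whether the back end keeps up with random I/O is to time small random reads from inside a Linux guest whose virtual disk lives on the iSCSI datastore. The sketch below is only a sanity check, not a proper benchmark; the file path, file size, block size, and read count are arbitrary assumptions.

```python
# Crude random-read latency probe, run inside a Linux guest on the iSCSI datastore.
import os, random, time

PATH = "/tmp/iotest.bin"         # placeholder: put this on the datastore-backed disk
BLOCK = 4096                     # 4 KiB reads, similar to typical VM random I/O
FILE_SIZE = 256 * 1024 * 1024    # 256 MiB test file
READS = 500

# Create the test file once, and make sure it is actually flushed to disk.
if not os.path.exists(PATH) or os.path.getsize(PATH) < FILE_SIZE:
    chunk = os.urandom(1024 * 1024)
    with open(PATH, "wb") as f:
        for _ in range(FILE_SIZE // len(chunk)):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())

fd = os.open(PATH, os.O_RDONLY)
try:
    # Ask the kernel to drop cached pages so reads hit the disk rather than
    # the page cache (Linux-specific; without this the numbers mean little).
    os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
    latencies = []
    for _ in range(READS):
        offset = random.randrange(0, FILE_SIZE - BLOCK, BLOCK)
        start = time.perf_counter()
        os.pread(fd, BLOCK, offset)
        latencies.append((time.perf_counter() - start) * 1000)  # milliseconds
finally:
    os.close(fd)

latencies.sort()
print(f"random 4 KiB read latency ms: p50={latencies[len(latencies) // 2]:.2f} "
      f"p99={latencies[int(len(latencies) * 0.99)]:.2f}")
```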