I run a bunch of servers on no budget. I have several fast 1U servers, but they don't have enough storage and there's no room to add more drives. I want to build a DIY SAN running Linux with some SSDs. Mounting storage from the SAN on the servers would be possible using iSCSI, but I'm worried about the latency overhead of TCP and SCSI.
So I thought I might use eSATA instead. I realise there are cable-length limitations and that it's a lot less flexible, but that is OK. I also sort of assume that some of the consumer-grade SANs run embedded Linux, and they seem to be able to pull off this feat. Googling has revealed no info on how to get Linux to export storage to other machines over eSATA. Can it be done?
Not with common hardware. The eSATA ports you have are 'host' (initiator) type, not 'device' (target) type, so they can only consume storage, not present it.
These days iSCSI is very efficient, thanks to drivers that offload most of the TCP processing to the NIC itself. Don't dismiss it without trying it.
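It costs very little to test. A minimal sketch, assuming a modern kernel with the LIO target and targetcli installed; /dev/sdb, the ssd0 backstore name, the IQN, and 10.0.0.1 are all placeholders for your setup:

    # On the SAN box: export /dev/sdb as an iSCSI LUN via the in-kernel LIO target.
    targetcli /backstores/block create name=ssd0 dev=/dev/sdb
    targetcli /iscsi create iqn.2024-01.lan.san:ssd0
    targetcli /iscsi/iqn.2024-01.lan.san:ssd0/tpg1/luns create /backstores/block/ssd0
    # Demo mode: accept any initiator (fine for a closed SAN segment, not for prod).
    targetcli /iscsi/iqn.2024-01.lan.san:ssd0/tpg1 set attribute generate_node_acls=1 demo_mode_write_protect=0
    targetcli saveconfig

    # On each server: discover and log in with open-iscsi.
    iscsiadm -m discovery -t sendtargets -p 10.0.0.1
    iscsiadm -m node -T iqn.2024-01.lan.san:ssd0 -p 10.0.0.1 --login

After login the LUN shows up as an ordinary /dev/sdX block device; benchmark it before deciding the TCP overhead matters.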
Another alternative (one I use very successfully) is AoE (ATA over Ethernet). Just run vblade on your 'target' nodes; the initiator driver is already in the kernel. Just be sure you have jumbo frames enabled on the SAN network, since each AoE frame carries only as many sectors as the MTU allows.
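For example, a minimal setup (assuming eth1 is the SAN-facing NIC and /dev/sdb is the SSD to export; both are placeholders):

    # On the SAN box: raise the MTU, then export /dev/sdb as AoE shelf 0, slot 1.
    ip link set eth1 mtu 9000
    vbladed 0 1 eth1 /dev/sdb

    # On each server (also with MTU 9000 on its SAN NIC):
    modprobe aoe                       # in-kernel AoE initiator
    mkfs.ext4 /dev/etherd/e0.1         # device appears as e<shelf>.<slot>
    mount /dev/etherd/e0.1 /mnt/san

Because AoE runs directly on Ethernet with no TCP or IP layer, there's very little protocol overhead; the trade-off is that it isn't routable, which sounds fine given your cable-length tolerance.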
I'm pretty sure this cannot be done in software. SATA is designed to attach drives to a single host, not as a multi-client protocol. The "consumer-grade SANs" you're thinking of are probably just external RAID enclosures that don't run an OS at all; the host still sees them as ordinary SATA devices.