I've currently got 3 ESXi servers (free version) using direct-attached storage. I want to move to shared storage, because direct storage is very limiting. There are about a dozen VMs, most with fairly low requirements (Exchange is the only one with much I/O).
Unfortunately, budget constraints mean I can't get a proper SAN. I can't even get something reasonable off eBay.
Basically, I have two choices. One is a Netgear ReadyNAS 3200. I'm kind of against this, because it only has 2 gigabit ports, and if you're using one of them for management, that leaves a single port for all our VMs. Not good.
The other is to pick up a Dell server (such as a T410), fairly low spec, add a quad-port gigabit card, and put Openfiler on it. For the same kind of price, I can get a dual-core Xeon, 4GB of RAM, and 6TB of storage (which is enough) on a RAID card, with dual PSUs.
Has anyone done something like this before? Can anyone comment on the performance I might get from this kind of setup?
And yes, I know you should have a proper (new) SAN, dedicated network, replication, etc., but that simply isn't an option. Thoughts?
It's quite difficult to answer your question without knowing anything about your workload. It will work, sure... but how well it works in your specific scenario depends entirely on how you're going to use it. You should definitely run some performance monitoring on your current storage, and then try to find benchmarks for the two solutions you're evaluating.
I'd personally go for the Dell server; you'll get much more flexibility and (probably) better performance. Just try to get one with a lot of disk bays... and avoid at all costs creating a single big RAID 5 array, as it can and will severely hurt performance if you put many VMs on it. RAID 10 is much faster, and having two or more arrays is a real improvement.
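To put rough numbers on that, here's a back-of-envelope sketch of the RAID 5 write penalty; the per-disk IOPS figure and the 70/30 read/write mix are assumptions, not measurements from your environment:

```python
# Back-of-envelope IOPS estimate: RAID 5 vs RAID 10 on the same disks.
# Assumptions: 7.2k SATA disks at ~75 IOPS each, a 70/30 read/write mix,
# and the classic write penalties (RAID 5 = 4 back-end I/Os per write,
# RAID 10 = 2).

DISKS = 6
IOPS_PER_DISK = 75
READ_FRACTION = 0.7

def effective_iops(disks, write_penalty):
    """Host-visible IOPS once the write penalty is accounted for."""
    raw = disks * IOPS_PER_DISK
    write_fraction = 1 - READ_FRACTION
    return raw / (READ_FRACTION + write_fraction * write_penalty)

print(f"RAID 5  (6 disks): ~{effective_iops(DISKS, 4):.0f} IOPS")  # ~237
print(f"RAID 10 (6 disks): ~{effective_iops(DISKS, 2):.0f} IOPS")  # ~346
```

The gap gets worse as the write fraction grows, which is exactly what happens when you stack many VMs on one array.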
Also, a side note: if you want to actually use that shared storage to move your VMs between the various servers, you'll need a Virtual Center, and then ESXi will cease to be a free product. Without a VC and proper licensing, you can't actually do much more than you can now by simply replacing local storage with shared storage.
The T410 you are describing is probably the same price as an entry-level iSCSI array from Promise, such as the M210p.
Also, you'd be surprised how far one iSCSI port can go. I've seen almost 10 VMs running off an iSCSI array over a single port, and even when pulling backups I wouldn't reach more than 50% of the link's capacity.
On the other hand, I also built an iSCSI server using Linux and found it pretty straightforward.
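For reference, exporting a volume over iSCSI on Linux takes only a few lines of configuration. This is a minimal sketch for iSCSI Enterprise Target (IET), the target implementation Openfiler uses under the hood; the IQN and device path are made-up examples:

```
# /etc/ietd.conf -- minimal example; IQN and device path are made up
Target iqn.2010-06.com.example:vmstore.lun0
    # blockio bypasses the page cache, usually a better fit for VM disks
    Lun 0 Path=/dev/vg0/vmstore,Type=blockio
```

Point the ESXi software iSCSI initiator at the box, rescan, and the LUN shows up as a candidate VMFS datastore.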
To me, the advantage of an iSCSI SAN is the hardware RAID; if you want to achieve the same level of quality on the Dell server, be ready to spend a couple hundred bucks on a hardware RAID card.
I also agree with Massimo on the RAID level; 10 is the way to go for VMs, IMO.
You might want to check out Windows Storage Server or Openfiler. Obviously, neither is ideal, but on a budget you have to do what you have to do.
I agree with Massimo that the answer to this question depends heavily on the workload. I have personally used Openfiler in production on a low-load file server. Keep in mind that Openfiler doesn't support SCSI reservations, and this can cause some problems. Check out this KB for more info.
iSCSI can be a cheap SAN solution; just make sure that the underlying hard drives and controllers have decent performance. Two bonded 1 Gb NICs can give you about 200 MB/s of throughput, but again, this depends on the type of I/O (random vs. sequential) the VMs generate.
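That 200 MB/s figure is just line-rate arithmetic; here's a quick sketch, where the ~80% efficiency factor for TCP/IP and iSCSI header overhead is an assumption (real numbers depend on MTU, bonding mode, and the I/O pattern):

```python
# Rough throughput ceiling for two bonded gigabit iSCSI links.
# The 0.8 efficiency factor (TCP/IP + iSCSI overhead) is an assumption.

GBIT = 1_000_000_000   # bits per second per link
NICS = 2
EFFICIENCY = 0.8       # assumed protocol overhead factor

raw_mb_s = NICS * GBIT / 8 / 1_000_000   # 250 MB/s on the wire
usable_mb_s = raw_mb_s * EFFICIENCY      # ~200 MB/s usable

print(f"raw:    {raw_mb_s:.0f} MB/s")
print(f"usable: ~{usable_mb_s:.0f} MB/s")
```

Also keep in mind that a single iSCSI session usually rides a single TCP flow, so one host talking to one LUN may only ever see one link's worth of bandwidth unless you set up multipathing.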