I have approximately 10 servers configured as follows:
- AMD 64 X2 (AM2) dual- or quad-core chips
- 4-16 GB DDR800+ RAM
- Dual SATA II software-mirrored boot drives (Windows Server)
I would like to configure them to use data drives hosted on a cheap DIY NAS rather than in each server.
I am considering using RAID10 or RAID5/6 with 8 SATA II drives in a separate machine. That machine would have 2 GbE ports connected to a GbE switch, and each server would connect to that switch with a dedicated GbE port (separate from its Internet uplink port).
Is this a really poor idea? How much bandwidth can I realistically push through these SATA drives for 10 servers?
How much bandwidth you get through the drives is going to depend on a number of factors: the speed of the drives, the RAID configuration you use, the RAID controller, etc. You didn't mention anything about the workload these systems are going to be using the NAS for. If it's just document storage, a setup like this is more than enough. If you're doing anything with high-performance I/O requirements, such as video editing, I think you'll find the setup lacking.
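To put rough numbers on it, here's a back-of-envelope sketch. The drive throughput figure and the RAID layout below are assumptions for illustration, not measurements from your hardware:

```python
# Rough back-of-envelope estimate of per-server bandwidth for a shared NAS.
# All figures below are assumptions for illustration, not measurements.

NUM_SERVERS = 10

# Assumed sequential throughput of one 7200 RPM SATA II drive, in MB/s.
DRIVE_MBPS = 80

# RAID10 across 8 drives: reads can use all 8 spindles, writes hit 4 mirrored pairs.
raid10_read = 8 * DRIVE_MBPS    # ~640 MB/s aggregate sequential reads (best case)
raid10_write = 4 * DRIVE_MBPS   # ~320 MB/s aggregate writes

# Two bonded GbE ports cap the NAS at roughly 2 x 125 MB/s on the wire,
# before protocol overhead.
network_cap = 2 * 125           # ~250 MB/s

# Whichever is smaller is the ceiling for the whole box.
usable = min(raid10_read, network_cap)
print(f"Aggregate ceiling: ~{usable} MB/s")
print(f"Per server if all 10 hit it at once: ~{usable / NUM_SERVERS:.0f} MB/s")
```

Under those assumptions the two GbE uplinks, not the spindles, are the first ceiling you hit once all ten servers are busy at the same time, and random I/O would land well below that.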
I've found that performance really lags on the cheaper NAS boxes. I'm not sure what the problem was, but I had a few lower-end SNAP! appliances and they were dogs.
If you're building the NAS yourself, then you have the opportunity to build speed into the device. Since you're probably not wanting to use expensive disks (i.e. faster, with more cache), you should make up for it with spindles. Eight 1 TB drives in RAID10 will give you plenty of speed and reliability with 4 TB usable. RAID6, on the other hand, would give you 6 TB usable, but you'd have to compute two parity blocks for every write; the capacity and write-penalty arithmetic is sketched below.
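A quick sketch of that arithmetic, using the drive count and size from the example above and the standard rule-of-thumb write penalties:

```python
# Usable capacity and write penalty for 8 x 1 TB drives under RAID10 vs RAID6.
# Penalty = physical disk I/Os generated per small logical write (rule of thumb).

DRIVES = 8
DRIVE_TB = 1

layouts = {
    # RAID10: half the drives hold mirror copies; each write goes to 2 disks.
    "RAID10": {"usable_tb": DRIVES * DRIVE_TB / 2, "write_penalty": 2},
    # RAID6: two drives' worth of space hold parity; a small write costs
    # read data + read P + read Q, then write data + write P + write Q = 6 I/Os.
    "RAID6": {"usable_tb": (DRIVES - 2) * DRIVE_TB, "write_penalty": 6},
}

for name, info in layouts.items():
    print(f"{name}: {info['usable_tb']:.0f} TB usable, "
          f"write penalty {info['write_penalty']}x")
```

That extra capacity from RAID6 comes at the cost of roughly three times the write amplification of RAID10 for small writes, which matters if the workload isn't mostly reads.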
I think this will largely depend on your networking hardware. If your cheap server has a cheap NIC, or you are using a cheap switch, you could see anywhere from 20-80% of what gigabit is capable of (roughly 25-100 MB/s per link).
As for whether it makes sense, I think that will depend on what the servers are doing. My gut tells me this solution isn't exactly what you need; it sounds like too many servers depending on inferior-quality hardware.
If your NAS server has enough slots for quad GigE cards, I would recommend using point-to-point connections between it and the 10 clients instead of putting a switch in between (i.e., crossover cables between the NAS and each client). This will give you maximum bandwidth, at the cost of future expansion and probably a significant up-front cost of buying many quad GigE cards.
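To put numbers on that layout (assuming quad-port cards, one dedicated link per client, and the same ~640 MB/s aggregate disk figure guessed at earlier in the thread):

```python
import math

# Port-count and bandwidth arithmetic for a point-to-point layout.
# Assumptions: one dedicated GigE link per client, quad-port cards,
# and an assumed ~640 MB/s aggregate sequential-read ceiling for the array.

CLIENTS = 10
PORTS_PER_CARD = 4
GBE_MBPS = 125             # ~1 Gb/s per link, before protocol overhead
DISK_AGGREGATE_MBPS = 640  # assumed RAID10 sequential-read ceiling

cards = math.ceil(CLIENTS / PORTS_PER_CARD)
wire_aggregate = CLIENTS * GBE_MBPS

print(f"Cards needed: {cards} ({cards * PORTS_PER_CARD} ports for {CLIENTS} links)")
print(f"Aggregate line rate: {wire_aggregate} MB/s")
print(f"Disks top out around {DISK_AGGREGATE_MBPS} MB/s, so past about "
      f"{DISK_AGGREGATE_MBPS // GBE_MBPS} busy clients the array becomes the limit")
```

So with three quad-port cards you remove the switch as a bottleneck, but once more than a handful of clients are pushing their links at full rate, the disk array itself becomes the limiting factor.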
It really depends on what you want to do with the NAS you're building. What sort of data will reside on the NAS? Databases? Files? Video? VM? How hard will each of the servers be hitting the NAS?
If you are looking at hooking 10 servers up to the NAS, you might want to look at bonding your GigE ports together.
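One caveat: with the usual per-flow hashing modes, link aggregation spreads different clients across the member links, but any single client connection still tops out around one link's speed. The sketch below is a simplified toy of that idea, not any particular bonding driver's behavior, and the addresses are made up:

```python
# Toy illustration of how link aggregation typically spreads traffic:
# each flow is hashed to one member link, so a single client connection
# never exceeds one link's speed. Addresses below are hypothetical.

BOND_LINKS = 2  # two GbE ports bonded on the NAS

clients = [f"192.168.1.{10 + i}" for i in range(10)]  # hypothetical client IPs

def pick_link(src_ip: str, links: int) -> int:
    """Simplified layer-3 hash: real bonds hash MAC/IP/port tuples."""
    last_octet = int(src_ip.rsplit(".", 1)[1])
    return last_octet % links

for ip in clients:
    print(f"{ip} -> bond member {pick_link(ip, BOND_LINKS)}")
```

The upshot is that bonding raises the aggregate ceiling when many servers hit the NAS at once, but it doesn't make any single server's connection faster than 1 Gb/s.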