I've got a total of 8 physical disks I can use (not including 2 for the mirrored OS) for a fairly high-use SQL Server.
It serves a web app (Win2k3, IIS6, ASP.NET) with fairly heavy usage (400K users) and a roughly even split of reads and writes. However, the database isn't very large (yet): currently around 10 GB.
My initial thought is to split the database drives up in the following way:
- Hardware RAID 5 (3 disks) for data
- Hardware RAID 5 (3 disks) for logs
- A single disk for tempdb
- A single disk for on-server backups
Essentially, I'm looking for the best performance by reducing disk contention between the different write workloads (tempdb, logs and data).
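For context, this is roughly how I'd point tempdb at its own disk; the T: drive letter and paths below are just placeholders, and tempdev/templog are the default logical file names in a standard install:

```sql
-- Relocate tempdb to a dedicated disk (T: and the paths are placeholders).
USE master;
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
GO
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
GO
-- The files move on the next service restart; the old files can then be deleted.
```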
I'd be curious to get any opinions on whether this looks like the optimal set-up, or whether you'd lay the disks out differently.
We could spend another £5k on an external RAID enclosure so that tempdb and the backups could be on RAID as well, but I think that's overkill.
Any thoughts?
You should use RAID 1 or RAID 10 if disk space isn't your main constraint: they are a lot faster than RAID 5, especially for writes. RAID 5 is also quite poor with only three disks; it becomes more worthwhile as the array grows.
I'd also avoid using a single non-RAID disk for tempdb: if it fails you won't lose any data (tempdb is recreated on every restart), but your server will be down until you can replace it.
I'd configure the disks as three RAID 1 volumes of two disks each: one for data, one for transaction logs and one for tempdb. That leaves a spare disk, which you can use either to mirror the backup volume or as a global hot spare. I'd avoid making the data volume RAID 5, as that would slow it down.
If you need more data space and can cope with losing tempdb, you can instead use four disks in RAID 10 for data, two disks in RAID 1 for transaction logs, and a single disk for tempdb.
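Whichever layout you pick, the placement itself is just explicit file paths at database creation time. A minimal sketch, assuming hypothetical drive letters (D: for the data volume, L: for the log volume) and a made-up database name:

```sql
-- Place data and log files on separate volumes (names and paths are assumptions).
CREATE DATABASE WebAppDB
ON PRIMARY (
    NAME = WebAppDB_data,
    FILENAME = 'D:\SQLData\WebAppDB.mdf',
    SIZE = 10240MB,       -- roughly the current ~10 GB of data
    FILEGROWTH = 1024MB
)
LOG ON (
    NAME = WebAppDB_log,
    FILENAME = 'L:\SQLLogs\WebAppDB_log.ldf',
    SIZE = 2048MB,
    FILEGROWTH = 512MB
);
```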
You're not likely to need more than two disks in RAID 1 for transaction logs: they shouldn't grow much if you're backing them up regularly; and you should be.
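On that last point, a regular log backup is a single statement you'd normally schedule via a SQL Agent job or maintenance plan. A sketch, with the database name and path assumed:

```sql
-- Back up the transaction log so it can be truncated and stops growing.
-- Requires the FULL (or BULK_LOGGED) recovery model; name and path are assumptions.
BACKUP LOG WebAppDB
TO DISK = 'B:\Backups\WebAppDB_log.trn'
WITH INIT;  -- overwrites each time; a real schedule would use dated file names
```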
Read this by Kendal Van Dyke: not only does it have plenty of links to great articles, it also explains the different possible setups and their performance implications.
I'd do something like this, but it depends on your server load and hardware capabilities.
RAID 10 is quicker than RAID 5. Other than that, the spec doesn't look too bad.
I would also recommend RAID 10. The only drawback is that you need a minimum of four disks per RAID 10 array, so you'll be left without a dedicated array for tempdb and on-server backups (assuming you put the DB files and log files each on their own array).
So you'll have:
- RAID 1 (2 drives) = OS and SQL binaries
- RAID 10 (4 drives) = DB files
- RAID 10 (4 drives) = log files
- tempdb = ?
- On-server backup = ?
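Once the arrays are carved up, it's worth confirming where every file actually landed. A quick check (assumes SQL Server 2005 or later; on SQL Server 2000 you'd query master..sysaltfiles instead):

```sql
-- List every database file and its physical location.
SELECT DB_NAME(database_id) AS database_name,
       name                 AS logical_name,
       physical_name
FROM sys.master_files
ORDER BY database_id, file_id;
```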
I'd be tempted to do the following:
- Back up across the network to another server if possible (see the sketch below).
- Go 64-bit with at least 8 GB of RAM.
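A network backup is just a UNC path as the target. The server and share names below are assumptions, and the SQL Server service account needs write permission on the share:

```sql
-- Full backup straight to a share on another server (names are assumptions).
BACKUP DATABASE WebAppDB
TO DISK = '\\backupserver\SQLBackups\WebAppDB_full.bak'
WITH INIT, STATS = 10;  -- STATS = 10 reports progress every 10%
```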