For a long time, I've been thinking about switching to RAID 10 on a few servers. Now that Ubuntu 10.04 LTS is out, it's time for an upgrade. The servers I'm using are HP ProLiant ML115s (very good value). Each has four internal 3.5" slots. I'm currently using one drive for the system and a software RAID5 array across the remaining three disks.
The problem is that this leaves a single point of failure on the boot drive. Hence I'd like to switch to a RAID10 array, as it would give me both better I/O performance and more reliability. The only problem is that good controller cards that support RAID10 (such as 3Ware) cost almost as much as the server itself. Moreover, software RAID10 does not seem to work very well with GRUB.
What is your advice? Should I just keep running RAID5? Has anyone been able to successfully install a software RAID10 setup without boot issues?
I would be inclined to go for RAID10 in this instance, unless you need the extra space offered by the single drive + RAID5 arrangement. You get the same guaranteed redundancy (any one drive can fail and the array will survive) and slightly better redundancy in the worst cases: RAID10 survives 4 of the 6 possible "two drives failed at once" scenarios, since with two mirrored pairs the array only dies when both disks of the same pair fail, which is 2 of the 6 possible two-disk combinations. You also avoid the write penalty often experienced with RAID5.
You are likely to have trouble booting off RAID10, whether it is implemented as a traditional nested array (two RAID1s in a RAID0) or using Linux's newer all-in-one RAID10 driver, because both LILO and GRUB expect all the information needed to boot to be on one drive, which may not be the case with RAID0 or RAID10 (or software RAID5 for that matter; it works with hardware RAID because the boot loader only sees one drive and the controller deals with how the data is actually spread amongst the drives).
There is an easy way around this though: just have a small partition (128MB should be more than enough - you only need room for a few kernel images and associated initrd files) at the beginning of each of the drives and set these up as a RAID1 array which is mounted as /boot. You just need to make sure that the boot loader is correctly installed on each drive, and all will work fine (once the kernel and initrd are loaded, they will cope with finding the main array and dealing with it properly); a rough sketch of this follows below.

The software RAID10 driver has a number of options for tweaking block layout that can bring further performance benefits depending on your I/O load pattern (see here for some simple benchmarks), though I'm not aware of any distributions that support this form of RAID10 from the installer yet (only the more traditional nested arrangement). If you want to try the RAID10 driver and your distro doesn't support it at install time, you could install the entire base system into a RAID1 array as described for /boot above and build the RAID10 array with the rest of the disk space once booted into that.
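For example, a minimal sketch of that arrangement with mdadm and GRUB, assuming four disks /dev/sda through /dev/sdd that each start with a small partition set aside for /boot (device names and filesystem choice are illustrative, not taken from your actual setup):

    # Mirror the small first partition of every disk and use it as /boot.
    # The old 0.90 metadata keeps the RAID superblock at the end of the
    # partition, so the boot loader can read it like a plain filesystem.
    mdadm --create /dev/md0 --level=1 --metadata=0.90 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mkfs.ext3 /dev/md0     # /boot is tiny, so the filesystem hardly matters

    # Install the boot loader onto every disk, so the machine still boots
    # no matter which drive the BIOS picks or which one has died.
    for disk in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
        grub-install "$disk"
    done

Once any surviving disk gets the kernel and initrd loaded, they assemble the main array themselves, as described above.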
For up to 4 drives, or as many SATA drives as you can connect to the motherboard, you are in many cases better served by using the motherboard SATA connectors and Linux MD software RAID than by a hardware RAID card. For one thing, the on-board SATA connections go directly to the southbridge, with a bandwidth of about 20 Gbit/s; many HW controllers are slower. Linux MD RAID is also often faster and much more flexible and versatile than HW RAID. For example, the Linux MD RAID10 far layout gives you almost RAID0 read speed. And you can have multiple arrays of different RAID types with Linux MD RAID, for example a /boot on RAID1, and then the root and other filesystems on raid10,far for speed, or RAID5 for space. A further argument is cost - buying an extra RAID controller is often more costly than just using the on-board SATA connections ;-)
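As a rough illustration of that kind of split (the device and partition names are assumptions, not from the question), the data array with the far layout could be created along these lines:

    # Small first partitions -> RAID1 for /boot (see the other answer);
    # large second partitions -> RAID10 with the "far 2" layout, which
    # gives close to RAID0 sequential read speed.
    mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=4 \
        /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
    cat /proc/mdstat       # watch the initial sync and confirm the layout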
A setup with /boot on RAID is described at https://raid.wiki.kernel.org/index.php/Preventing_against_a_failing_disk.
More info on Linux RAID can be found on the Linux RAID kernel group wiki at https://raid.wiki.kernel.org/
Well, you pretty much answered your own question.
If software RAID10 doesn't work well with GRUB, that implies you can't use RAID10 here.
How about using RAID5 across all four disks? This doesn't sound like a high-end (or high-traffic) server to me, so the performance penalty probably won't hurt that much.
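If you do stay with RAID5, a rough mdadm sketch for a four-disk software array (device names are assumed, and you would still want a small RAID1 /boot as discussed elsewhere so the machine can boot):

    # Software RAID5 across the large data partitions of all four disks.
    mdadm --create /dev/md1 --level=5 --raid-devices=4 \
        /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
    mkfs.ext4 /dev/md1     # or ext3, whichever you prefer on 10.04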
Edit: I just googled a bit, and it seems that GRUB can't read software RAID by itself. You need the boot loader installed on every disk you might want to boot from (with RAID5: every disk). This seems extremely clumsy to me; have you considered buying a used RAID5 card from eBay?