I have always used hardware-based RAID because it (IMHO) sits at the right level (feel free to dispute this), and because OS failures are more common for me than hardware issues. With software RAID, if the OS fails, the RAID is gone and so is the data, whereas with hardware RAID the data remains, regardless of the OS.
However on a recent Stack Overflow podcast they stated they would not use hardware RAID as software RAID is better developed and thus runs better.
So my question is: are there any reasons to choose one over the other?
I prefer software RAID.
Software RAID has the big advantage of not being tied to a particular set of hardware. For example, I've had controller and/or mainboard failures that resulted in the loss of an array.
Today's CPUs are plenty fast enough to handle parity on RAID-5 variants. I've also never had any issue with bus saturation from multiple concurrent reads.
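To see how little work that parity actually is, here's a minimal illustrative sketch in Python (real implementations such as Linux md do this in optimized C, often with SIMD): RAID 5 simply XORs the data blocks of each stripe, and the same XOR rebuilds a lost block.

    from functools import reduce

    def parity_block(blocks: list[bytes]) -> bytes:
        """XOR equal-sized data blocks together, byte by byte,
        to produce the stripe's parity block."""
        return bytes(reduce(lambda a, b: a ^ b, column)
                     for column in zip(*blocks))

    # One stripe across three data disks, plus its parity:
    d0, d1, d2 = b"\x01\x02", b"\x10\x20", b"\xff\x00"
    p = parity_block([d0, d1, d2])

    # Disk 1 dies: XOR the survivors with the parity to rebuild it.
    assert parity_block([d0, d2, p]) == d1

A modern CPU can push this kind of XOR through at gigabytes per second, which is why the parity math is rarely the bottleneck.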
I prefer HW RAID, because if you have to pull good disks out of a dead machine you're not dependent on the dead OS's configuration of the RAID array.
You do keep backups of your RAID controller's config, don't you?
So just load that up on a donor machine, slot in the drives (in the right order! You did label your drives before you pulled them, right?), restart on a clean OS, and your data is recovered.
The OS drives are not the important drives to keep; the most important things to keep are the data drives!
(You do back up your data drives, right?)
An important consideration is reliability: in the end, both hardware RAID and software RAID are just software implementations of the same algorithms (the "hardware" version simply runs in controller firmware), so both are susceptible to software bugs.
After many years of running software RAID setups on Linux, I've never run into a bug that caused data loss. But I have seen several cases of complete data loss with a very expensive hardware RAID controller from a reputable manufacturer.
Two lessons to learn from this:
- A widely deployed software RAID implementation is tested by a far larger user base than any single vendor's controller firmware.
- A high price tag is no guarantee of reliability.
Hardware RAID controllers usually come with a battery-backed RAM cache, which speeds up write operations even when using software RAID. So if I can, I always try to get a hardware RAID controller with a battery-backed cache, and then run software RAID on top of it if the controller's firmware isn't up to the task.
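For a sense of why that battery matters, here's an illustrative Python sketch (real controllers do this in firmware; the class names are made up): a write-back cache can acknowledge a write as soon as it lands in cache RAM, and the battery is what makes that early acknowledgement safe, since pending blocks survive a power cut until they are flushed to disk.

    import time

    class Disk:
        """Simulated spinning disk: every write costs a seek."""
        def write(self, block: bytes) -> None:
            time.sleep(0.005)           # roughly 5 ms per write

    class BatteryBackedCache:
        """Write-back cache: acknowledge immediately, flush later."""
        def __init__(self, disk: Disk) -> None:
            self.disk = disk
            self.pending: list[bytes] = []

        def write(self, block: bytes) -> None:
            self.pending.append(block)  # cache RAM: microseconds, not ms

        def flush(self) -> None:        # firmware does this in the background
            for block in self.pending:
                self.disk.write(block)
            self.pending.clear()

Without the battery, the same write-back trick would lose every pending block on a power failure, which is exactly the gamble the cheap controllers take.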
I think Jeff's experience with his RAID arrays comes down to getting (and relying on) the crappiest/cheapest RAID controllers. "Wow, this RAID array does a billion gigaflops a second and I got it for £10 on eBay!"
If you value your data, get a good, proven, reliable RAID controller.
Even better, get two (with failover).
Even better, get with the 21st century and get a dedicated external FC- or iSCSI-connected disk array with built-in fault tolerance and zero single points of failure: dual paths, dual RAID controllers, RAID 6 or 10 (or 20 or 50), and hot spares.
Yes, it's expensive. But how expensive would it be if the entire SO site were trashed?
It depends. For simple mirroring scenarios I prefer software RAID because, as Jason W said, you can always remove one of the drives and stick it in another machine.
For other scenarios (RAID 0, RAID 5, or RAID 10), a single drive isn't much use on its own anyway, so I prefer hardware RAID.
Regardless (and I say this with all due respect and love, guys), you shouldn't make your decisions based on what Stack Overflow, a group of software guys, has or hasn't done.
Software RAID has failed to do its job for me on a number of occasions; hardware RAID never has. That said, cheap hardware RAID is worse than good software RAID, so spend a few £$€ to get good controllers.
Software RAID is dependent on the OS. Hardware RAID is dependent on the card and the OS driver.
That is what it comes down to. It is very easy to get a replacement OS: just reinstall. A replacement RAID card, especially after a few years, can be impossible to find.
Some RAID cards will hide the whole array from the OS, but the driver will still know that it is RAID. The best cards handle all the low-level work, such as writing across disks and computing parity, whereas the worst make the OS do everything.
The cheaper cards have a huge tendency to get the parity calculations wrong. Imagine a few TB of data looking fine until you try to open it. Nightmare.
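To make that failure mode concrete, here's an illustrative Python sketch (not any vendor's actual firmware): wrong parity is invisible on normal reads, because reads never touch the parity block; the corruption only surfaces when a disk dies and the rebuild runs.

    def xor_blocks(*blocks: bytes) -> bytes:
        """Bytewise XOR of equal-length blocks."""
        out = bytearray(len(blocks[0]))
        for block in blocks:
            for i, byte in enumerate(block):
                out[i] ^= byte
        return bytes(out)

    d0, d1 = b"important", b"user data"
    parity = xor_blocks(d0, d1)

    # A buggy controller writes the parity with one bit flipped:
    bad_parity = xor_blocks(parity, bytes([0, 0, 0x04, 0, 0, 0, 0, 0, 0]))

    # Normal reads go straight to d0 and d1, so everything looks fine...
    assert (d0, d1) == (b"important", b"user data")

    # ...until d1 fails and is rebuilt from d0 and the bad parity:
    print(xor_blocks(d0, bad_parity))  # b'usar data': silently corrupted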
3ware cards are expensive but useless: the throughput is really bad under high load on Windows, and they will pretty much lock up a system on Linux if you enable NFS. Dell PERC cards (versions 5 and 6) are great, however. The earlier ones cheated a bit on RAID 10.
I've seen hardware RAID cards fail and take out an entire array. You are definitely adding another single point of failure with a hardware card, unless you're in a redundant configuration.
You should be aware that there's "hardware RAID" and then there's HARDWARE RAID. Google "fakeraid" for more info. Some "hardware RAID" cards actually do very little RAID processing on the card itself, and use custom drivers to do the RAID calculations in software on the system's regular processor. This can lead to strange results. I had one of these systems (a Windows 2003 server) start showing separate C and D drives instead of one C drive, because something got confused somewhere. That should never be possible with true hardware RAID, since the array appears to the system as one physical drive.
I have very little experience with software RAID. I've been prejudiced strongly against it in the past, but am now moving towards using it, based on things I've heard here and elsewhere. I'd consider testing it for a future deployment.
On the other hand, I've moved away from any kind of in-server RAID to external RAID systems. Almost all my servers have zero drives installed. I'm in love with Xiotech systems, but other types of external RAIDs have also served me well. I've never (knocking on wood) lost data from one yet.
The answer is quite different on Linux/Unix and on Windows. Software RAID on Linux is much better than Windows software RAID, which is limited in the layouts it supports and very slow (at least on Windows Server 2003), so on Windows you are better off with hardware RAID in just about every case. On Linux and Unix, software RAID works well enough to be a reasonable choice, although for a larger installation you will probably still be better off with hardware RAID or a SAN.