How does the Software Mirror in Windows Server systems compare to an affordable hardware/BIOS RAID?
I've had some rather bad experiences with onboard hardware RAID controllers, so I want to avoid them in the future. OTOH, I've used XP's and W2K3's software mirror without problems so far - though I've never yet had to recover from a completely dead disk.
For a new server (currently planned as W2K8 Hyper-V), we want redundant disks again. Based on my previous experience, I'd use the software mirror.
I'd like some feedback on whether that makes any sense at all - regarding performance, recoverability in the case of an actual failure, suggestions for setup, choice of disks, etc.
(I've also had trouble with a €400 RAID controller card being incompatible with the mainboard, so it's not only cheap hardware that's to blame.)
BIOS-based RAID is little more than a poorer version of software RAID, from everything I've read and encountered. Plus, if your mobo is fried, you usually have to replace it with a similar mobo, because the BIOS writes its own proprietary metadata to the disks to track the volumes. This is a DEFINITE consideration for recovery of data, as you can't just slap a mirrored disk into another system and pull data off it if that system can't see the volumes. There are also issues I've encountered where something zaps the config information in the BIOS, and after a reboot it can't find the volumes anymore!
Software RAID on Windows has a reputation for being good enough for redundancy but weak on performance; from my research, though, it's mainly software RAID 5 on Windows where you really take a hit - plain mirroring holds up reasonably well. Linux software RAID is regarded as mature and very usable, often on par with hardware schemes.
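For what it's worth, setting up a Windows software mirror is only a handful of diskpart commands. A minimal sketch, assuming disk 0 carries the volume and disk 1 is the empty second disk (the disk numbers and drive letter are examples; both disks must be converted to dynamic first):

    DISKPART> select disk 0
    DISKPART> convert dynamic
    DISKPART> select disk 1
    DISKPART> convert dynamic
    DISKPART> select volume C
    DISKPART> add disk=1

The add command attaches a mirror plex on disk 1 and kicks off the initial resync in the background; list volume shows the status while it syncs.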
Hardware-wise, I've used a few kinds of hardware RAID (3Ware and PERC), and I've had one disaster with them and a couple of save-your-arse situations. Hardware RAID usually also allows for things like physical notifications (blinky lights and labels on ports) so you don't need to figure out which drive has failed, and these controllers tend to keep the underlying mechanisms hidden from the OS, so it's less of a pain in the a__ to configure and maintain. Some cards, like the 3Ware ones, actually have some nice features, and hardware cards also tend to offer the best performance. It's also nice if they support hot-swapping of drives, depending on your server usage scenario.
I think in your situation it depends on what this server is going to be used for. Even a relatively slow RAID with light use isn't a handicap. If it's something getting hammered...database server, for example, or busy mail server...go hardware. If it's light to medium use, use Windows software RAID, especially if you're not comfortable with the hardware version.
But generally speaking...avoid the BIOS RAID like the plague for a server or critical system.
Either way, make sure you have a recovery plan (backup) that allows bare metal recovery...RAID is not a backup :-)
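On W2K8 the built-in Windows Server Backup can cover that; a minimal sketch (the target drive letter is an example):

    C:\> wbadmin start backup -backupTarget:E: -allCritical -quiet

The -allCritical switch includes every volume needed for a bare metal restore, which is exactly the part you want tested alongside the RAID.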
I would use a hardware mirror. Software mirrors in Windows have a problem with identifying which physical drive has gone bad, and they usually aren't hot-swappable either.
The hardware RAIDs I have used are hot-swappable, meaning a failed drive can be changed ASAP without downtime.
I personally like external RAIDs (Sans Digital has a good 2-drive system) just because they're easier to access and require less system customisation.
There are three kinds:
1) Hardware - I prefer these since they are independent of the OS and usually have great performance. The cost of a good model will be quite high.
2) Semi-hardware - This is, I think, the most common kind, and it includes almost all consumer controllers. Only some operations are supported in hardware; most of the work still eats your processor time. Cost is low.
3) Software - I actually use quite a few of these and I can't complain much. While there is a performance impact, it is not as high as one would think (I mostly played with mirroring). Cost is 0.
With a good budget I would go with a pure hardware one. However, a software-only solution is not too bad, and for most (if not all) usages it is quite satisfactory. Just be sure that you leave enough processor time for it to do its work (especially if you intend to run a lot of virtual machines there).
Personally I always go with something I am comfortable with. In your case it is software RAID and I see no problems if you go down that path.
I've tested real hardware systems before putting them into production by pulling a drive and ensuring it was rebuilt correctly. This allowed me to become familiar with how the RAID functioned so when a drive did fail I knew what to expect with it.
I have also done that with a fake RAID card, before I knew it was fake RAID, and it performed the same. Mind you, it was a Dell card included with the machine, so it was likely built to a higher standard than most other motherboard fake RAID systems. So, success there as well. Again, test while in pre-production.
As a matter of practice I will only buy real hardware RAID now. As for the argument that you need to replace a failed hardware RAID controller with the same model, there are two points to that. First, all major server vendors (Dell, HP, etc.) will have replacements available for a set period of time, so just make sure you life-cycle the server before that date. You can also purchase most of them with 4-hour response times if parts are critical for you. Second, I've also seen controllers where, if the array is RAID 1, you can sometimes pull a drive and run it off a normal SAS/SATA controller. This "may" work with RAID 1 since it's a mirrored drive; RAID 5 would not be able to do this. Again, test beforehand to make sure this works if it's something you may want to rely on.
I've never used Linux software RAID and have only tinkered with the Windows version, but there have been good reports, so you'll have to investigate for yourself.
Whatever you choose, test it by pulling a drive and letting it rebuild before putting the server into production, so you know what happens when things go south and how to recover.
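If you're drilling this with a Windows software mirror, the checks are all in diskpart. A rough sketch for the pull-and-reinsert case, assuming the pulled drive is disk 1 (the numbers are examples):

    DISKPART> list volume
    DISKPART> rem the mirrored volume now shows "Failed Rd"
    DISKPART> select disk 1
    DISKPART> online disk
    DISKPART> recover
    DISKPART> rem recover resynchronizes the stale mirror plex

If the drive is genuinely dead and replaced with a new one, you'd instead break the failed plex off the mirror (break ... nokeep), convert the new disk to dynamic, and re-add it with add disk=n.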
Hardware RAID controllers are the only devices that can really assure a certain degree of reliability.
A RAID 1 configuration has to assure not only that the data is safe, but that the system stays alive through a disk failure and replacement. What I mean is that being assured the system will survive a disk failure is not sufficient: it has to remain up and running through the entire fail-replace-rebuild process. This implies that the whole stack has to be fully and truly hot-swappable: the hardware, the bus (hot-plugging), and the software.
Especially on Windows systems, software or BIOS-driven RAID implementations can't assure many of the above requirements.
Full software RAID may perform better than BIOS-driven RAID on the same OS (Linux), but SW or BIOS RAID brings a number of difficulties that are often underestimated. Both disks should be identical, and the system must be able to boot from the second disk if the "primary" disk fails. Nothing impossible, but this makes a good software RAID implementation not so trivial.
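To be fair, Windows handles part of the boot problem for you: when you mirror the boot volume, a second boot entry for the mirror plex is added to the BCD store (described as something like "Windows Server 2008 - secondary plex"). You can check it's there - a sketch, with output trimmed, and the exact description text varies by version:

    C:\> bcdedit /enum
    ...
    description             Windows Server 2008 - secondary plex

You still have to make sure the BIOS boot order will actually try the second disk; the extra BCD entry doesn't help if the firmware never hands control to that disk.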
Obviously, software and BIOS RAID are very, very cheap; that is a good point for them.
My suggestion is to always use a true HW RAID controller (even a cheap controller is better than a SW implementation) wherever you need to improve availability.
If budget is a concern and availability is not, feel free to downgrade to a good BIOS-driven RAID when it's supported by a strong OS (BSD, Linux).
On Windows I totally avoid any non-HW RAID.