When I search around for examples of RAID1 installations, it seems that admins tend to place their swap partition under RAID1.
To me, it is more intuitive to have two disks, each with a large partition for the RAID1 array holding the system, plus a smaller partition for swap outside of the RAID array.
What is the worst-case scenario if I lose a disk, and effectively half of my swap space, while the system is running?
Should I expect to see a performance increase or decrease when mirroring a swap volume vs. having two separate swap volumes outside of RAID?
If swap should be mirrored, does it make more sense to give swap its own RAID1 array, or does it make more sense to partition one big RAID1 array with LVM?
(Note on the last question: I'm not sure whether an mdX device can be partitioned without LVM, but the Debian installer leads me to believe that it cannot.)
If you are using RAID1 you won't lose half your swap, only one of the two mirrors; the worst case there is that you lose any performance benefit you might otherwise have gained. If instead you have two separate swap areas on the individual drives, the kernel will use both in a fashion similar to RAID0 (if they have the same priority set) or JBOD (if priorities differ, using the top-priority area until it is full, then the next). In that arrangement, if one of the drives dies your system is likely to fall over as soon as anything touches the missing swap area. This is why swap spaces usually live on the RAID1 volumes: it is simply safer, and that usually matters more than performance.
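To illustrate the priority behaviour (not a recommendation, given the safety point above), equal-priority swap entries in /etc/fstab look something like this; the device names here are illustrative:

```
# Equal priority: the kernel allocates pages across both areas
# round-robin, RAID0-style.
/dev/sda2  none  swap  sw,pri=10  0 0
/dev/sdb2  none  swap  sw,pri=10  0 0

# Differing priorities: sda2 fills first, then sdb2 (JBOD-style).
# /dev/sda2  none  swap  sw,pri=10  0 0
# /dev/sdb2  none  swap  sw,pri=5   0 0
```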
Two separate swap areas would be used in a manner similar to RAID0, so you would generally expect a performance increase, though it depends on what other load your drives are under at the time. With modern kernels the RAID1 driver can try to guess which drive is best to read each block from, so you may get some of the read performance boost; for writing to swap you obviously won't, as both mirrors must be updated. On most modern setups the performance of swap matters less than its safety anyway: RAM is relatively cheap these days, so unless you are butting against the limit of how much RAM your motherboard can take, you should aim to have enough RAM that swap is touched as little as possible.
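If you want to sanity-check how much your swap is actually being used before worrying about its performance, the standard tools suffice; a quick sketch:

```
# List active swap areas with their priorities and current usage
swapon --show

# Older equivalent
cat /proc/swaps

# Watch the si/so (swap-in/swap-out) columns every 5 seconds;
# sustained non-zero values mean the box is genuinely short of RAM
vmstat 5
```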
It will make little difference if you are using the same pair of disks. A common reason for giving swap a separate array is when the main array is RAID5/6 (not true in your case), to avoid paging out being hit by the RAID5/6 write performance issue. You can probably tune performance by trying to ensure that the swap areas sit close to the busiest part of the disks (so if you have a 1 TB array with a 250 GB logical volume holding your busiest database files, put your swap volume next to that) to reduce head movement while swapping. Really, though, such tweaks are not time well spent: by the time you are swapping heavily, a percent-or-two benefit won't make the difference between performing OK and not.
I believe you can partition software RAID volumes as far as the kernel is concerned, but that doesn't mean the installer understands such arrangements. In examples not using LVM I've always seen the drives divided into partitions first, with a separate RAID array for each partition, rather than one large RAID volume that is then partitioned. I recommend the LVM method unless you have a specific reason to avoid it: it is more flexible and, in my experience, no less reliable than other arrangements.
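As a sketch of that LVM arrangement (device names, volume group name, and sizes are illustrative assumptions, not a drop-in recipe):

```
# One large RAID1 array across matching partitions on both disks
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Layer LVM on top of the array
pvcreate /dev/md0
vgcreate vg0 /dev/md0

# Carve out root and swap logical volumes; both are mirrored by md0
lvcreate -L 50G -n root vg0
lvcreate -L 4G  -n swap vg0

mkfs.ext4 /dev/vg0/root
mkswap /dev/vg0/swap
swapon /dev/vg0/swap
```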
You can have the benefit of RAID0 fast access (although reads only) and the security of RAID1 on Linux if you use RAID10. The Linux MD driver can create RAID10 volumes on any number of drives, from two up. To get the speed benefit you need to specify the array layout as "far" (`-p f2`); this way the read performance is similar to RAID0 while the write performance is only slightly slower than RAID1.
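For instance, a two-drive far-layout array for swap might be created like this (partition names assumed for illustration):

```
# Two-device RAID10 with the "far 2" layout (-p f2 / --layout=f2)
mdadm --create /dev/md1 --level=10 --layout=f2 --raid-devices=2 /dev/sda2 /dev/sdb2
mkswap /dev/md1
swapon /dev/md1
```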