I have 4 hard drives, each with two partitions on them: 10.1GB for swap and 990.1GB for the rest. I took this and set up two MD devices with RAID10, one for the set of 4 swap partitions and one for the set of 4 other partitions. I set the 20.2GB Software RAID device as my swap and moved on to LVM.
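(For reference, the RAID part of that amounts to roughly the following from the command line; device names like /dev/sd[abcd] and /dev/md[01] are assumptions, and the installer does the equivalent for you.)

    # RAID-10 over the four 10.1GB partitions -> the 20.2GB swap device
    mdadm --create /dev/md0 --level=10 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mkswap /dev/md0

    # RAID-10 over the four 990.1GB partitions -> the big device for LVM
    mdadm --create /dev/md1 --level=10 --raid-devices=4 \
        /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2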
This is as far as this guide takes me using software RAID. I would now like to set up LVM and encryption on it.
I created a new volume group and logical volume, 1.5TB in size. I encrypted the volume and set the remaining 1.4TB within the encrypted volume as the root filesystem (ext4, mounted at /).
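(Roughly, those steps are the equivalent of the following, with hypothetical names; note that in this ordering the encryption sits on top of the logical volume rather than underneath LVM.)

    pvcreate /dev/md1                       # the big RAID-10 device
    vgcreate vg0 /dev/md1
    lvcreate -L 1.5T -n root vg0

    cryptsetup luksFormat /dev/vg0/root     # encrypt the logical volume
    cryptsetup luksOpen /dev/vg0/root root_crypt
    mkfs.ext4 /dev/mapper/root_crypt        # the ext4 / filesystem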
Here are my questions:
Should I set up a separate Volume / Logical Volume for the 20.2GB Software RAID device being used as a swap area? Should I encrypt this volume as well if I'm encrypting the ext4 / area?
"Finish partitioning and write changes to disk" gives an error of:

"You have selected the root file system to be stored on an encrypted partition. This feature requires a separate /boot partition on which the kernel and initrd can be stored. You should go back and setup a /boot partition."
Where does this /boot partition need to be set up? (Should each drive have an extra partition for this before setting up RAID?) How much space does it need? Should it be part of LVM? Should it be encrypted?
/boot needs to not be encrypted, otherwise the boot loader (unless I'm behind the times and one of them supports encrypted volumes) will not be able to read the kernel and initrd. It does not need to be encrypted, as it should never contain anything other than the kernel, the initrd, and perhaps a few other support files.

If the device that is your LVM PV is encrypted, then /boot will need to be elsewhere: probably a separate RAID volume. If the device used as the PV is not encrypted (instead you encrypted the LV that is to be /), then /boot could be in the LVM, except for the GRUB-can't-boot-off-all-RAID-types issue (see below).

Historically /boot had to be near the start of the disk, but modern boot loaders generally remove this requirement. A few hundred MB should be perfectly sufficient, but with such large drives being standard these days there will be no harm in making it bigger just in case, unless you are constrained by trying to fit into a very small device (say, a small SD card in a Pi or similar) as might be the case for an embedded system.

Most boot loaders do not support booting off RAID, or if they do they only support booting off RAID1 (where every drive has a copy of all the data) "by accident", so create the small partition on all the drives and use a RAID1 array over them. This way /boot is readable as long as at least one drive is in a working state. Make sure the boot loader installs into the MBR of all four drives on install, otherwise if your BIOS boots off another drive (due to the first being offline, for instance) you will have to mess around getting the loader's MBR onto the other drive(s) at that point rather than it already being there.

Update: As per Nick's comment below, modern boot loaders can deal directly with some forms of encrypted volumes, so depending on your target setup there are now fewer things to worry about.
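Concretely, the small-RAID1-partition-on-every-drive approach might look something like this (device names are assumptions, and --metadata=1.0 keeps the md superblock at the end of the partition so a boot loader that doesn't understand md can still read each member as a plain filesystem):

    # 4-way RAID1 over a small partition on each drive, for /boot
    mdadm --create /dev/md0 --level=1 --raid-devices=4 --metadata=1.0 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    mkfs.ext4 /dev/md0                 # /boot stays unencrypted

    # put the boot loader's MBR code on every drive so any of them can boot
    for d in /dev/sda /dev/sdb /dev/sdc /dev/sdd; do
        grub-install "$d"
    done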
I haven't set up software RAID-10 using the installer, but I think some of the things I ran into while setting up Debian with RAID-1 + LVM + encryption might help. I don't know how Debian's console installer differs from Ubuntu's installer, so I can't offer details on how to do this.
For /boot, GRUB2 has raid and lvm modules that can be loaded with its insmod command, which should handle Linux's md raid10 layout. The exact details are apparently here, but the site is down. Based on the information I'm getting from their manual (you may have somewhere from 31KiB to 1MiB to work with; my core.img is 24KB already, and raid.mod and lvm.mod are 6KB each), you may or may not be able to use it depending on how much space your partitioning tool wastes. Even if it did fit, you may not be able to get Ubuntu's installer to set it up for you. Unless you feel like risking more time on it, I'd stick to David's separate partition using RAID-1 (which won't need any extra modules, since it "accidentally" works as long as you install grub to all of the drives' MBRs individually). Either way, it can't be encrypted.

As for data and swap: if you are planning to RAID-10 both of them and enter them into LVM and encrypt them, then there isn't a point to making them separate partitions. Make one giant partition on each drive and handle the division in LVM.
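If you do want to experiment with the grub2 modules route, the usual approach (as I understand it) is to have the extra modules baked into core.img when installing GRUB, which also lets you see whether the size problem bites; a hypothetical example:

    # embed md RAID and LVM support into core.img (module names vary:
    # newer GRUB2 builds split "raid" into mdraid09/mdraid1x)
    grub-install --modules="raid lvm" /dev/sda
    ls -l /boot/grub/core.img          # path varies by GRUB version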
So if you aren't going to try the grub2 lvm+raid modules, your partitions should look something like this: a small partition at the start of each drive, combined into a RAID-1 array holding an unencrypted /boot, with the rest of each drive in a second partition, combined into a RAID-10 array that is encrypted and used as the LVM physical volume.
At this point, you create an LVM volume group using the larger RAID device as its physical volume, and then whatever logical volumes you'd like.
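A minimal sketch of that stack, assuming the RAID-10 device is /dev/md1 and the volume group is called vg0 (the names and sizes are assumptions; the installer would normally do the equivalent for you):

    cryptsetup luksFormat /dev/md1              # encrypt the big RAID-10 array
    cryptsetup luksOpen /dev/md1 md1_crypt

    pvcreate /dev/mapper/md1_crypt              # LVM goes inside the encryption
    vgcreate vg0 /dev/mapper/md1_crypt

    lvcreate -L 20G -n swap vg0                 # divide the space in LVM,
    lvcreate -l 100%FREE -n root vg0            # not with partitions

    mkswap    /dev/vg0/swap
    mkfs.ext4 /dev/vg0/root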
If you really want whole disk encryption then I would set up /tmp as a logical volume with a random key, and then / as an encrypted logical volume covering the rest of the space. Keeping /tmp as a separate partition at least keeps people from filling up the filesystem with temp files and causing logging and other things to break (a crypttab sketch of the random-key /tmp is at the end of this answer).

Otherwise, take stock of what you want to encrypt. Decide whether you want to have separate encrypted logical volumes so that you can have some encrypted data (e.g. data backup) not mounted all the time. If having specific programs installed is not what you're hiding, then consider having a specific location (/home) encrypted while / is unencrypted. Of course, you may have more than one location that needs to be encrypted but not want to enter a half dozen passphrases to boot.

Personally, what I do is create a (totally not FHS compliant) /crypt filesystem, and symlink every directory I want to protect into there. (Note: this technique makes it obvious that you have encrypted data and possibly what data it is. Naming your symlink Project_Orion_Nuclear_Spacedrive is probably leaking more information than you'd like.) Planned correctly, the system can even boot unattended and someone can enter the passphrase later. For instance, my database servers and their encrypted drives are set to not automatically mount or run at boot, so after a reboot the system will boot up enough for me to ssh in, then I can mount the filesystem and start the database server.
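As a rough sketch of both ideas (the random-key /tmp and an encrypted volume that is not touched at boot), using hypothetical names and Debian/Ubuntu's crypttab conventions:

    # /etc/crypttab
    # /tmp gets a throwaway random key and a fresh filesystem every boot
    tmp    /dev/vg0/tmp    /dev/urandom  tmp,cipher=aes-xts-plain64,size=256
    # the /crypt volume needs a passphrase, so leave it alone at boot
    crypt  /dev/vg0/crypt  none          luks,noauto

    # /etc/fstab
    /dev/mapper/tmp    /tmp    ext4  defaults  0  0
    /dev/mapper/crypt  /crypt  ext4  noauto    0  0

    # later, over ssh:
    cryptsetup luksOpen /dev/vg0/crypt crypt
    mount /crypt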