We have a Debian Linux server with a single 120GB SSD and 2x2TB HDDs in RAID1. We now have to move to another server which has 2x240GB SSDs and a single 2TB HDD. The aim is to move the current 120GB system SSD onto a 240GB SSD RAID1 and to move the data from the current 2TB RAID1 HDDs to the single 2TB HDD.
Moving the 2TB data won't be a problem, so I'm focusing my question on the system SSD. Our current setup is somewhat complicated. The 120GB SSD has the following partitions:
fdisk -l /dev/sda
Disk /dev/sda: 120.0 GB, 120034123776 bytes
255 heads, 63 sectors/track, 14593 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          66      523264   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              66       14594   116696064   83  Linux
and fstab tells us:
cat /etc/fstab
# /etc/fstab: static file system information.
#
# <file system>             <mount point>  <type>  <options>          <dump>  <pass>
proc                        /proc          proc    defaults           0       0
/dev/mapper/vgdebian-root   /              ext4    errors=remount-ro  0       1
/dev/mapper/vgdebian-swap   none           swap    sw                 0       0
/dev/sda1                   /boot          ext3    defaults           0       2
So the boot stuff, i.e. the kernel and a BusyBox environment with dropbear, is all on /dev/sda1. Dropbear then helps to unlock the /dev/sda2 partition, which is encrypted with cryptsetup (LUKS) and managed by LVM.
I'm not sure what the best way is to move all of /dev/sda to a newly created SSD RAID1. Should I first make a dd copy to one of the new disks, enlarge the /dev/sda2 partition (each of the new SSDs is 240GB instead of 120GB) and make dropbear aware of the new 240GB SSD? Should I then copy the whole first 240GB SSD to the second one and run an mdadm create-array command?
Or should I create a clean /dev/md0 array on the new 240GB SSDs first and then copy the whole old drive onto that /dev/md0 device?
How would dropbear/busybox react to the RAID? Presumably the new /dev/sda1 would have to be copied to /dev/sdb1 so that dropbear/busybox can be booted from either of the new SSDs. The RAID1 would only come into existence once the decrypted LVM Debian is booting, am I right?
Maybe someone can give me some hints on whether it is possible to move such an encrypted system at all. Thanks for any help.
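For reference, the unlock step is driven by an /etc/crypttab entry along these lines (the mapping name sda2_crypt is my guess, yours may differ; it just has to match the device the LVM physical volume sits on):

# <target name>   <source device>   <key file>   <options>
sda2_crypt        /dev/sda2         none         luks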
Edit: I transferred all 120GB of our old SSD to one of the new 240GB SSDs of the new server via dd over SSH (tutorial here: https://library.linode.com/migration/ssh-copy). Then I changed some of the dropbear config, rebuilt the initramfs and rebooted; the system works as usual on the new server.
Next I needed to resize the copied image of the old SSD, so I enlarged /dev/sda2 to the maximum, then enlarged the physical volume, the logical volume and finally the file system. I rebooted and everything works fine (tutorial here: http://ubuntuforums.org/showthread.php?p=4530641).
Last thing: moving the whole setup from the single SSD to RAID1. Any hints, anyone?
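The raw copy was done roughly like this (a sketch of the dd-over-SSH approach from that tutorial; the host name and device paths are placeholders, check them with fdisk -l before running anything):

# run on the old server: stream the whole 120GB SSD to the first SSD of the new server;
# the new server has to be booted from a rescue system so its /dev/sda is not in use
dd if=/dev/sda bs=4M | ssh root@new-server 'dd of=/dev/sda bs=4M'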
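The individual resize steps were roughly these (a sketch; the LUKS mapping name sda2_crypt is an assumption, the volume names are taken from the fstab above, adjust everything to your own setup):

# 1. grow /dev/sda2 to the end of the disk (delete/recreate it with the SAME start sector in fdisk), reboot
# 2. grow the opened LUKS mapping to fill the enlarged partition
cryptsetup resize sda2_crypt
# 3. grow the LVM physical volume inside the LUKS container
pvresize /dev/mapper/sda2_crypt
# 4. grow the root logical volume and its ext4 file system
lvextend -l +100%FREE /dev/vgdebian/root
resize2fs /dev/mapper/vgdebian-root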
Edit 2: Currently I'm trying to get the RAID1 part working. Now that the old system runs on one of the 240GB SSDs, I have found two tutorials for migrating a non-RAID system to RAID1: pug.org/mediawiki/index.php/Vorhandenes_MD-RAID1_im_laufenden_Betrieb_verschl%C3%BCsseln and howtoforge.com/software-raid1-grub-boot-debian-etch-p2. Both work from the running system. I'm using the first tutorial for the LUKS part and the second one for the rest; I hope it will work out.
OK, I got it running now!
After I had copied the old SSD via dd onto one of the new SSDs and resized it to the new 240GB size as described above, the only remaining problem was initializing the RAID1. I found the two guides mentioned in Edit 2, but in the end it was this tutorial that did it:
http://www.howtoforge.com/how-to-set-up-software-raid1-on-a-running-lvm-system-incl-grub2-configuration-debian-squeeze-p3
So I had to set up new RAID1 arrays with the second drive marked as missing (see the tutorial), roughly as follows:
The /dev/md1 array holds the cryptsetup LUKS container, which in turn contains the whole LVM stack, and on top of that sits the actual file system:
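(A sketch based on that tutorial; /dev/sdb is the second, still empty SSD and its partition layout is assumed to mirror /dev/sda.)

# create both arrays in degraded mode, with the first SSD left out as "missing"
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/sdb1    # will hold /boot
mdadm --create /dev/md1 --level=1 --raid-devices=2 missing /dev/sdb2    # will hold the LUKS/LVM container
# (older boot loaders may need --metadata=0.90 or 1.0 for the /boot array; GRUB2 copes with the default)
cat /proc/mdstat    # both arrays should show up as active but degraded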
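Schematically (the mapping name md1_crypt is my own choice, not something the tutorial mandates):

/dev/md1 (sdb2, later also sda2)  ->  LUKS container (md1_crypt)  ->  LVM VG vgdebian  ->  vgdebian-root / vgdebian-swap

/etc/crypttab has to be pointed at /dev/md1 and the initramfs rebuilt afterwards, otherwise dropbear/busybox will still try to unlock the old partition.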
After creating md0/md1, I copied sda1 with the working /boot onto md0. Then I created a new cryptsetup LUKS container on md1. Then I extended my existing volume group onto it and migrated everything over to /dev/md1 using pvmove (this is all plain LVM).
I needed to do some chrooting to reinstall/update GRUB2, but that may just have been my particular case. Afterwards I wiped the old /dev/sda, recreated its partitions and added them to the mdadm arrays, so everything got nicely resynced.
The reboot worked after 2-3 tries, and the whole thing is running again now, after 12 hours of work :)
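In commands that was roughly the following (a sketch; the mapping names sda2_crypt/md1_crypt and the temporary mount point are assumptions):

# put a file system on md0 and copy the working /boot over
mkfs.ext3 /dev/md0
mkdir -p /mnt/md0
mount /dev/md0 /mnt/md0
cp -a /boot/. /mnt/md0/

# new LUKS container on md1, opened and turned into an LVM physical volume
cryptsetup luksFormat /dev/md1
cryptsetup luksOpen /dev/md1 md1_crypt
pvcreate /dev/mapper/md1_crypt

# extend the existing volume group onto it and migrate all extents off the old PV
vgextend vgdebian /dev/mapper/md1_crypt
pvmove /dev/mapper/sda2_crypt /dev/mapper/md1_crypt
vgreduce vgdebian /dev/mapper/sda2_crypt

# afterwards /etc/fstab (/boot on /dev/md0) and /etc/crypttab (md1_crypt on /dev/md1) need updating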
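For completeness, that last step looked roughly like this (a sketch; it assumes /dev/sda was repartitioned to mirror /dev/sdb and its partitions re-typed as Linux RAID autodetect):

mdadm --add /dev/md0 /dev/sda1
mdadm --add /dev/md1 /dev/sda2
cat /proc/mdstat        # watch the resync

# install GRUB2 into the MBR of both SSDs so either one can boot, and rebuild the initramfs
grub-install /dev/sda
grub-install /dev/sdb
update-grub
update-initramfs -u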