mdadm does not seem to support growing an array from level 1 to level 10.
I have two disks in RAID 1. I want to add two new disks and convert the array to a four disk RAID 10 array.
My current strategy:
1. Make good backup.
2. Create a degraded 4 disk RAID 10 array with two missing disks.
3. rsync the RAID 1 array to the RAID 10 array.
4. Fail and remove one disk from the RAID 1 array.
5. Add the available disk to the RAID 10 array and wait for resync to complete.
6. Destroy the RAID 1 array and add the last disk to the RAID 10 array.
The problem is the lack of redundancy at step 5.
Is there a better way?
With linux softraid you can make a RAID 10 array with only two disks.
Device names used below:
- md0 is the old array of type/level RAID1.
- md1 is the new array of type/level RAID10.
- sda1 and sdb2 are new, empty partitions (without data).
- sda2 and sdc1 are old partitions (with crucial data).
Replace names to fit your use case. Use e.g. lsblk to view your current layout.
0) Backup, Backup, Backup, Backup oh and BACKUP
1) Create the new array (4 devices: 2 existing, 2 missing):
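With the example device names above, this step would be something along these lines (verify your own device names first; the two missing members are deliberate):

mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sda1 missing /dev/sdb2 missing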
Note that in this example layout sda1 has a missing counterpart and sdb2 has another missing counterpart. Your data on md1 is not safe at this point (effectively it is RAID0 until you add the missing members). To view the layout and other details of the created array, use the commands below.
Note! You should save the layout of the array.
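For example (the output file name is just a suggestion; keep the copy somewhere that is not on the array itself):

mdadm --detail /dev/md1
mdadm --detail /dev/md1 > /root/md1-details.txt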
2) Format and mount. The /dev/md1 should be immediately usable, but it needs to be formatted and then mounted.
3) Copy files. Use e.g. rsync to copy data from the old RAID 1 to the new RAID 10. (The commands below are only an example; read the man pages for rsync.)
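A sketch of steps 2 and 3, assuming an ext4 filesystem and that the old array is mounted at /mnt/md0 (filesystem type and mount points are examples only):

mkfs.ext4 /dev/md1
mkdir -p /mnt/md1
mount /dev/md1 /mnt/md1
rsync -avH --progress /mnt/md0/ /mnt/md1/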
4) Fail 1st part of the old RAID1 (md0), and add it to the new RAID10 (md1)
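With the example names above, this would be roughly (double-check which partition belongs to md0 before failing it):

mdadm /dev/md0 --fail /dev/sda2 --remove /dev/sda2
mdadm /dev/md1 --add /dev/sda2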
Note! This will wipe out the data on sda2. The md0 should still be usable, but only if the other raid member was fully operational.
Also note that this will begin a syncing/recovery process on md1. To check the status use one of the commands below. Wait until recovery is finished.
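Either of these shows the rebuild progress:

cat /proc/mdstat
mdadm --detail /dev/md1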
5) Install GRUB on the new Array (Assuming you're booting from it). Some Linux rescue/boot CD works best.
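On a Debian-style system, after chrooting into the new array, that typically looks something like this (install to every disk you may boot from; the exact commands depend on your distro and bootloader version):

grub-install /dev/sda
grub-install /dev/sdb
update-grub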
6) Boot on the new array. IF IT WORKED CORRECTLY, destroy the old array and add the remaining disk to the new array.
POINT OF NO RETURN
At this point you will destroy data on the last member of the old md0 array. Be absolutely sure everything is working.
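With the example names above, the remaining old partition is sdc1, so this step would be roughly:

mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdc1    # wipes the old RAID1 metadata from the last member
mdadm /dev/md1 --add /dev/sdc1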
And again - wait until recovery on md1 is finished.
7) Update mdadm config
Remember to update /etc/mdadm/mdadm.conf (remove md0), and save the config to the initramfs (to be available after reboot).
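On Debian/Ubuntu that would look something like this (adjust for your distro):

mdadm --detail --scan >> /etc/mdadm/mdadm.conf    # then edit the file and remove the old md0 line
update-initramfs -u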
Follow the same procedure as Mark Turner, but when you create the raid array, mention 2 missing disks:
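For example (device names are placeholders; the two missing entries are the important part):

mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sda1 missing /dev/sdb1 missing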
And then proceed with other steps.
In short, create a RAID10 with 4 disks in total (of which 2 are missing), resync, then add the other two disks.
Just finished going from LVM on two 2TB disk mdadm RAID 1 to LVM on a four disk RAID 10 (two original + two new disks).
As @aditsu noted the drive order is important when creating the array.
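For whole disks the create command would look roughly like this (sda/sdb are placeholders; note how the real devices and the missing entries are interleaved):

mdadm -v --create /dev/md1 --level=raid10 --raid-devices=4 /dev/sda missing /dev/sdb missing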
The code above gives a usable array with two missing disks (add partition numbers if you aren't using whole disks). As soon as the third disk is added it will begin to sync. I added the fourth disk before the third had finished syncing. It showed as a spare until the third disk finished, then it started syncing.
Steps for my situation:
1. Make good backup.
2. Create a degraded 4 disk RAID 10 array with two missing disks (we will call the missing disks #2 and #4).
3. Tell wife not to change/add any files she cares about.
4. Fail and remove one disk from the RAID 1 array (disk 4).
5. Move physical extents from the RAID 1 array to the RAID 10 array leaving disk 2 empty (see the sketch after this list).
6. Kill the active RAID 1 array, add that now empty disk (disk 2) to the RAID 10 array, and wait for resync to complete.
7. Add the first disk removed from RAID 1 (disk 4) to the RAID 10 array.
8. Give wife go ahead.
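A rough sketch of steps 4-7, assuming the old RAID 1 is md0 on sdb1 (disk 2) and sdd1 (disk 4), the new RAID 10 is md1, and the volume group is called vg0 (all of these names are placeholders):

mdadm /dev/md0 --fail /dev/sdd1 --remove /dev/sdd1   # step 4
pvcreate /dev/md1
vgextend vg0 /dev/md1
pvmove -v /dev/md0 /dev/md1                          # step 5: move the physical extents
vgreduce vg0 /dev/md0
pvremove /dev/md0
mdadm --stop /dev/md0                                # step 6: kill the RAID 1
mdadm --zero-superblock /dev/sdb1
mdadm /dev/md1 --add /dev/sdb1                       # add disk 2, wait for resync
mdadm /dev/md1 --add /dev/sdd1                       # step 7: add disk 4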
At step 7 I think drive 1, 2, OR 4 can fail (during resync of disk 4) without killing the array. If drive 3 fails the data on the array is toast.
I did it with LVM. Initial configuration: sda2 and sdb2, with raid1 md1 created on top. sda1 and sdb1 were used for a second raid1 for the /boot partition. md1 was a PV in the volume group "space", with some LVs on it.
I've added disks sdc and sdd and created partitions on them like on sda/sdb.
So:
1. created md10 as:
mdadm --create /dev/md10 --level raid10 --raid-devices=4 /dev/sdc2 missing /dev/sdd2 missing
2. extend vg on it:
pvcreate /dev/md10
vgextend space /dev/md10
3. moved volumes from md1 to md10:
pvmove -v /dev/md1 /dev/md10
(wait for done)
4. reduce volume group:
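Presumably something like this (space is the volume group name from this setup):

vgreduce space /dev/md1
pvremove /dev/md1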
5. stop array md1:
mdadm -S /dev/md1
6. add disks from old md1 to md10:
mdadm -a /dev/md10 /dev/sda2 /dev/sdb2
7. update configuration in /etc/mdadm/mdadm.conf:
mdadm -E --scan >>/etc/mdadm/mdadm.conf
(and remove the old md1 entry there)
Everything done on live system, with active volumes used for kvm's ;)
I have moved my raid1 to raid10 now, and while this page helped me, there are some things missing in the answers above. In particular, my aim was to keep the ext4 birthtimes.
the setup was:
as everyone stated before: the zero step should be backup, and something can always go wrong in the process, resulting in extreme data loss
BACKUP
setup of the new raid
create a new raid
(i found that the layout is important .. the 2nd and 4th devices seem to be the duplicates in a default 'near' raid)
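something along these lines, with placeholder device names and the two missing members in the 2nd and 4th slots:

mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdc1 missing /dev/sdd1 missing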
migrate
now getting the data over ... i was first trying to use rsync which worked but failed to keep the birthtime ... use dd to clone from the old raid to the new one
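a minimal sketch, assuming the old array is md0 and the new one is md1 (both unmounted, and md1 at least as large as md0):

dd if=/dev/md0 of=/dev/md1 bs=64M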
WAIT FOR IT
you can check with sending USR1 to that process
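for example (assuming it is the only dd process running):

kill -USR1 $(pidof dd)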
fix the raid
gparted is a great tool: you tell it to check&fix the partition and resize it to the full size of that disk with just a few mouseclicks ;)
set a new uuid to that partition and update your fstab with it (change uuid)
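for ext4 this can also be done from the command line (the filesystem must be unmounted, and you may need to run e2fsck -f first; md1 is a placeholder name):

tune2fs -U random /dev/md1
blkid /dev/md1    # shows the new uuid to put into /etc/fstab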
store your raid in conf
and remove the old one
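roughly (Debian-style path; adjust for your distro):

mdadm --examine --scan >> /etc/mdadm/mdadm.conf    # then edit the file and delete the old array's line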
destroying the old one
fail the first one and add it to the new raid
then make gpt on that device and set a new empty partition
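one plausible way to do these two steps, with placeholder names (old raid1 = md0 on sda1 and sdb1, new raid10 = md1 built on partitions):

mdadm /dev/md0 --fail /dev/sda1 --remove /dev/sda1
mdadm --zero-superblock /dev/sda1                     # clear the old raid metadata
parted -s /dev/sda mklabel gpt mkpart primary 0% 100%
mdadm /dev/md1 --add /dev/sda1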
WAIT FOR IT
you can check with
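for example:

cat /proc/mdstat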
stop the second one
then make gpt on that last device and set a new empty partition again
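again with placeholder names, for the remaining old disk (sdb here):

mdadm --stop /dev/md0
mdadm --zero-superblock /dev/sdb1
parted -s /dev/sdb mklabel gpt mkpart primary 0% 100%
mdadm /dev/md1 --add /dev/sdb1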
WAIT FOR IT again