I did a lot of research before opening this post. I found many posts on the subject, but none really matched my configuration, so I'm opening this one. I have 2 HDDs in RAID 1:
Disk /dev/sdj: 558.9 GiB, 600127266816 bytes, 1172123568 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 5C9A14EB-BD49-435D-A136-62086235D780

Device          Start        End    Sectors   Size Type
/dev/sdj1        2048 1172121599 1172119552 558.9G Linux filesystem

Disk /dev/sdk: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 3DCFB2AE-DABD-4B10-96AF-DB389F943DE5

Device          Start        End    Sectors   Size Type
/dev/sdk1        2048 1171875839 1171873792 558.8G Linux filesystem
/dev/sdk2  1171875840 1953523711  781647872 372.7G Linux filesystem
sdj1 + sdk1 = RAID1
I'm adding 2 other disks. For the partitioning I followed the procedure from this tutorial (https://ubuntuforums.org/showthread.php?t=713936):
`sfdisk /dev/sdd < partitions.sdb`
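For reference, here is a minimal sketch of the sfdisk copy assumed here; the dump file name suggests /dev/sdb as the source disk, which is my assumption:

# Dump the GPT layout of the source disk into a file (assumed source: /dev/sdb)
sudo sfdisk -d /dev/sdb > partitions.sdb
# Replay that layout onto the new disk
sudo sfdisk /dev/sdd < partitions.sdb
# Note: the dump includes the label-id line, so the copy keeps the same
# GPT disk identifier as the source (visible in the fdisk output below)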
This has worked before for another RAID on the same host. So I end up with:
Disk /dev/sdg: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 3DCFB2AE-DABD-4B10-96AF-DB389F943DE5

Device          Start        End    Sectors   Size Type
/dev/sdg1        2048 1171875839 1171873792 558.8G Linux filesystem
/dev/sdg2  1171875840 1953523711  781647872 372.7G Linux filesystem

Disk /dev/sdi: 931.5 GiB, 1000204886016 bytes, 1953525168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 3DCFB2AE-DABD-4B10-96AF-DB389F943DE5

Device          Start        End    Sectors   Size Type
/dev/sdi1        2048 1171875839 1171873792 558.8G Linux filesystem
/dev/sdi2  1171875840 1953523711  781647872 372.7G Linux filesystem
and /proc/mdstat shows:
Personalities : [raid1] [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid10]
md125 : inactive sdn[0]
976224256 blocks super external:/md126/0
md126 : inactive sdn[0](S)
538328 blocks super external:ddf
md227 : active raid6 sdl[7] sdh1[6] sde1[5] sdf1[4] sdd1[3] sdc1[0] sdb1[1]
9766912000 blocks super 1.2 level 6, 512k chunk, algorithm 18 [7/6] [UUUUUU_]
[=========>...........] reshape = 46.8% (915172352/1953382400) finish=3557.0min speed=4864K/sec
bitmap: 2/15 pages [8KB], 65536KB chunk
md127 : active raid1 sdg1[3](S) sdk1[2](S) sdj1[0] sdi1[1]
585805824 blocks super 1.2 [2/2] [UU]
bitmap: 4/5 pages [16KB], 65536KB chunk
unused devices: <none>
md127 is the RAID 1. As you can see, I am also reshaping a RAID 5 into a RAID 6 (md227) at the same time.
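For context, a reshape like that is typically started with something along these lines (a sketch, not my exact command; the device count comes from the mdstat output above and the backup file path is only an example):

# Sketch: grow a RAID 5 into a 7-device RAID 6 (backup file path is an example)
sudo mdadm --grow /dev/md227 --level=6 --raid-devices=7 --backup-file=/root/raid5to6backup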
Why do I get this:
mdadm: Impossibly level change request for RAID1
when running this:
sudo mdadm --grow /dev/md127 --level=10 --raid-devices=4 --backup-file=/root/raid1backup
Is it because of the ongoing RAID 6 reshape? Is the partitioning wrong? Or is it because the array is mounted and busy with Docker containers?
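For anyone hitting the same message, a couple of basic state checks (a sketch, using the array name from above):

# Show level, device count, and which members are active vs. spare
sudo mdadm --detail /dev/md127
# Show any reshape or resync currently running on this host
cat /proc/mdstat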
In the end it turned out to be very simple because of my backup file. I hope this helps people like me who could not find much documentation on how to restart a grow after a clean reboot:
after the reboot, it should resume the work automatically; you can verify it with
cat /proc/mdstat
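If the reshape does not restart on its own, here is a sketch of the sequence I would use (the array name and backup file path are the ones from my grow command above; adapt them to the grow that was interrupted):

# Check whether the reshape resumed by itself
cat /proc/mdstat
sudo mdadm --detail /dev/md127
# If it is stuck, resume it explicitly, pointing at the same backup file
sudo mdadm --grow --continue /dev/md127 --backup-file=/root/raid1backup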