I have an Ubuntu 18.04 LTS server with two 3 TB Fujitsu SATA disks with MBR partition tables. Output from fdisk -l and lsblk is shown below:
sudo fdisk -l /dev/sd[ab]
Disk /dev/sda: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x0f6b5c4d
Device Boot Start End Sectors Size Id Type
/dev/sda1 2048 1953525167 1953523120 931,5G fd Linux RAID autodetect
/dev/sda2 1953525760 4294967294 2341441535 1,1T fd Linux RAID autodetect
Disk /dev/sdb: 2,7 TiB, 3000592982016 bytes, 5860533168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x0f6b5c4d
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 1953525167 1953523120 931,5G fd Linux RAID autodetect
/dev/sdb2 1953525760 4294967294 2341441535 1,1T fd Linux RAID autodetect
and
jan@xenon:/etc/grub.d$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 2,7T 0 disk
├─sda1 8:1 0 931,5G 0 part
│ └─md0 9:0 0 931,4G 0 raid1
│ ├─lvraid0-boot 253:0 0 952M 0 lvm /boot
│ ├─lvraid0-root 253:1 0 9,3G 0 lvm /
│ ├─lvraid0-usr 253:2 0 41,4G 0 lvm /usr
│ ├─lvraid0-var 253:3 0 27,1G 0 lvm /var
│ ├─lvraid0-local 253:4 0 20G 0 lvm /usr/local
│ └─lvraid0-home 253:5 0 1,3T 0 lvm /home
└─sda2 8:2 0 1,1T 0 part
└─md1 9:1 0 1,1T 0 raid1
└─lvraid0-home 253:5 0 1,3T 0 lvm /home
sdb 8:16 0 2,7T 0 disk
├─sdb1 8:17 0 931,5G 0 part
│ └─md0 9:0 0 931,4G 0 raid1
│ ├─lvraid0-boot 253:0 0 952M 0 lvm /boot
│ ├─lvraid0-root 253:1 0 9,3G 0 lvm /
│ ├─lvraid0-usr 253:2 0 41,4G 0 lvm /usr
│ ├─lvraid0-var 253:3 0 27,1G 0 lvm /var
│ ├─lvraid0-local 253:4 0 20G 0 lvm /usr/local
│ └─lvraid0-home 253:5 0 1,3T 0 lvm /home
└─sdb2 8:18 0 1,1T 0 part
└─md1 9:1 0 1,1T 0 raid1
└─lvraid0-home 253:5 0 1,3T 0 lvm /home
sdc 8:32 0 238,5G 0 disk
├─sdc1 8:33 0 32G 0 part [SWAP]
└─sdc2 8:34 0 206,5G 0 part
And the mdadm settings are:
sudo mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Sat Mar 8 12:55:25 2014
Raid Level : raid1
Array Size : 976630488 (931.39 GiB 1000.07 GB)
Used Dev Size : 976630488 (931.39 GiB 1000.07 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sun Mar 7 16:39:14 2021
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : xenon:0 (local to host xenon)
UUID : 862908d9:73a53975:471031c2:2e8d79fd
Events : 1429528
Number Major Minor RaidDevice State
2 8 1 0 active sync /dev/sda1
3 8 17 1 active sync /dev/sdb1
sudo mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Wed Apr 29 23:17:46 2020
Raid Level : raid1
Array Size : 1170588608 (1116.36 GiB 1198.68 GB)
Used Dev Size : 1170588608 (1116.36 GiB 1198.68 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent
Update Time : Sun Mar 7 16:27:26 2021
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0
Consistency Policy : resync
Name : xenon:1 (local to host xenon)
UUID : df9fa048:c1e13ee2:fc85098e:0292f063
Events : 2010
Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 18 1 active sync /dev/sdb2
As seen above, each disk has two partitions which are used to build two software RAID 1 volumes, md0 and md1, which in turn are used by LVM as physical volumes for the volume group lvraid0. This all works fine, but the disk layout wastes a lot of space because the MBR partition table prevents me from using the full size of the disks. The reason is that I originally had two 1 TB disks which I later replaced with the 3 TB disks. I should also mention that my server boots with a legacy BIOS (not UEFI).
In order to use the entire disk space I'd like to change the disk partitioning from MBR to GPT. I have researched how to do this and it seems to be possible using a method something like the following, but I would like input from someone who has done something similar, with the specific commands, so I don't end up with a broken installation:
- Remove one drive /dev/sdb from the RAID volumes md0 and md1
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --manage /dev/md1 --fail /dev/sdb2
mdadm --manage /dev/md0 --remove /dev/sdb1
mdadm --manage /dev/md1 --remove /dev/sdb2
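Before touching the disk any further I would double-check that the partitions are really out of the arrays, something like:
cat /proc/mdstat          # sdb1/sdb2 should no longer appear under md0/md1
mdadm --detail /dev/md0   # should now report the array as degraded with one device removed
mdadm --detail /dev/md1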
Should I zero out the contents of the /dev/sdb partitions to avoid problems with mdadm device scans?
dd if=/dev/zero of=/dev/sdb1 bs=1M count=1000
dd if=/dev/zero of=/dev/sdb2 bs=1M count=1000
If I don't do this I fear some mdadm or LVM2 scan may find leftovers - please specify the relevant commands.
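As an alternative to dd I have been looking at wiping just the metadata; this is a sketch of what I have in mind, untested on this box:
mdadm --zero-superblock /dev/sdb1   # remove the md superblocks so nothing re-assembles from these partitions
mdadm --zero-superblock /dev/sdb2
wipefs --all /dev/sdb1              # clear any remaining filesystem/LVM/RAID signatures
wipefs --all /dev/sdb2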
- Re-partition /dev/sdb to use GPT
I would like to partition the drive something like this:
a. one partition /dev/sdb1 for md0, the same as on the MBR disk:
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 1953525167 1953523120 931,5G fd Linux RAID autodetect
b. one partition /dev/sdb2 to be used by GRUB2 - what size, 10 MB?
c. one partition /dev/sdb3 to be used for a possible future UEFI setup - 100 MB?
d. one partition /dev/sdb4 for md1, bigger than the original md1 partition
e. leave some space at the end for future use by mdadm/LVM2
Which GPT type codes should I use for the new partitions (RAID autodetect, bootable, GRUB, etc.)? I know they are different from the ones used on MBR disks but I don't know which to use.
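Concretely, my current guess is that the sgdisk commands would look something like this; the start/end sectors for the first partition are copied from my existing layout, the other sizes are just the guesses from above, and the partition names are labels I invented:
sgdisk --zap-all /dev/sdb                                         # wipe the old MBR and any stale GPT data
sgdisk -n 1:2048:1953525167 -t 1:fd00 -c 1:"raid-md0"  /dev/sdb   # same boundaries as the old sdb1, Linux RAID (fd00)
sgdisk -n 2:0:+10M          -t 2:ef02 -c 2:"bios-grub" /dev/sdb   # BIOS boot partition (ef02) for the GRUB core image
sgdisk -n 3:0:+100M         -t 3:ef00 -c 3:"esp"       /dev/sdb   # EFI system partition (ef00) for possible future UEFI
sgdisk -n 4:0:+1500G        -t 4:fd00 -c 4:"raid-md1"  /dev/sdb   # bigger partition for md1 (fd00), leaving the rest free
sgdisk -p /dev/sdb                                                # print the new table to double-check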
- Re-adding /dev/sdb to the two RAID volumes
Do I need to remove the RAID write-intent bitmaps, and if so, how and when, and how/when do I re-create them?
mdadm --manage /dev/md0 --add /dev/sdb1
mdadm --manage /dev/md1 --add /dev/sdb4
now await re-synchronization to finish
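My plan is to watch the rebuild and not continue until both arrays are in sync, something like:
cat /proc/mdstat        # shows recovery progress
mdadm --wait /dev/md0   # blocks until md0 has finished resyncing
mdadm --wait /dev/md1   # blocks until md1 has finished resyncing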
- Install GRUB2 on the newly created partition
Will grub-install /dev/sdb work as normal? I guess it will, if I used the right type code for the dedicated GRUB partition. As I understand it, there should be a protective MBR at LBA sector 0 and the rest of GRUB goes into the dedicated partition. Will this be enough to make the disk bootable?
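What I have in mind is something like the following (the explicit --target is probably redundant on a pure BIOS install, but I'd rather spell it out):
grub-install --target=i386-pc --recheck /dev/sdb   # embeds the GRUB core image into the BIOS boot partition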
I guess I should now update my mdadm configuration:
mdadm --detail --scan > /etc/mdadm/mdadm.conf
and update the boot environment:
update-initramfs -u -k all
update-grub
/etc/fstab should be fine since the UUIDs it uses are LVM2 UUIDs
and verify that the system can boot off the new GPT disk before going any further and converting the other MBR disk /dev/sda; my current plan for this step is sketched below.
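Putting the config update and the verification together, something like this; /usr/share/mdadm/mkconf is what I believe the Debian/Ubuntu mdadm package ships for regenerating mdadm.conf, and perhaps it is safer than overwriting the file with --detail --scan as above:
/usr/share/mdadm/mkconf > /etc/mdadm/mdadm.conf   # regenerate mdadm.conf (assumed Debian/Ubuntu helper)
update-initramfs -u -k all
update-grub
# reboot from /dev/sdb (e.g. via the BIOS boot menu) and then verify:
cat /proc/mdstat                                  # both arrays assembled and clean
lsblk -o NAME,PTTYPE,SIZE /dev/sda /dev/sdb       # sdb should show PTTYPE "gpt", sda still "dos"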
- Resizing of volumes
I assume this would happen after both drives have been converted from MBR to GPT.
I have kept the size of the /dev/md0 partition the same as on the original MBR disk, so that should be OK?
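To confirm that, I would probably just compare the raw partition sizes (in 512-byte sectors) before re-adding each new partition:
blockdev --getsz /dev/sda1   # size of the old MBR partition
blockdev --getsz /dev/sdb1   # the new GPT partition should be at least as large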
I suspect I need to tell mdadm that /dev/md1, which now resides on a much bigger partition (/dev/sdb4 in the layout above) than the original MBR partition, needs growing. How do I do that?
mdadm --grow /dev/md1 --size=max ??
Once that is done I guess I also need to tell LVM2 to use the now bigger physical volume /dev/md1. How do I do that?
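Related to my bitmap question above: I have read that mdadm may refuse --size=max while an internal write-intent bitmap exists, in which case something like this would be needed (this is an assumption on my part, I have not tested it):
mdadm --grow /dev/md1 --bitmap=none       # drop the internal write-intent bitmap, if any
mdadm --grow /dev/md1 --size=max          # grow the array to fill the new partitions
mdadm --grow /dev/md1 --bitmap=internal   # re-create the bitmap afterwards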
pvresize /dev/md1 ??
And I guess I also need to resize /home, which resides on /dev/md1, to use the increased space. I use ext4 filesystems, so I guess I should use something like
resize2fs /dev/lvraid0/home ??
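If I understand LVM correctly, the logical volume itself must be extended before the filesystem, so the full sequence for this part would be something like this (the +100%FREE allocation is just an example):
pvresize /dev/md1                         # let LVM see the grown physical volume
lvextend -l +100%FREE /dev/lvraid0/home   # extend the home LV into the new free extents
resize2fs /dev/lvraid0/home               # grow the ext4 filesystem (works online)
# or combine the last two steps: lvextend -r -l +100%FREE /dev/lvraid0/home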
I hope someone can help with this. I found a guide for this on the internet in the past but can't seem to find it anymore :-(