I just upgraded from Ubuntu 16.04 to 20.04. My computer has 4 × 1 TB HDDs in a RAID 10 array: sda, sdb, sdc, and sdd together form md0. This array holds my home directory, which in my case is /home/joe/. I had this line in fstab:
UUID=f7790191-84f3-4d9b-81b8-43de132244a2 /home ext4 defaults 0 0
Ubuntu itself is installed on a solid-state drive, /dev/nvme0n1.
This was the output of blkid under 16.04:
/dev/nvme0n1p1: UUID="1c7b2d4e-d543-4e71-8b05-569a0993e339" TYPE="ext4" PARTUUID="3acfb4f5-01"
/dev/nvme0n1p5: UUID="2403be72-9dca-43b6-a596-044cfd813801" TYPE="swap" PARTUUID="3acfb4f5-05"
/dev/sda: UUID="43468a60-e0d2-6202-4e0c-320120beeee1" UUID_SUB="a49f1c1a-3450-39bc-8efb-67da1ebeacdf" LABEL="joeslinux:0" TYPE="linux_raid_member"
/dev/sdb: UUID="43468a60-e0d2-6202-4e0c-320120beeee1" UUID_SUB="11eb3ea8-74da-18c2-cd0a-bb2454c0cb46" LABEL="joeslinux:0" TYPE="linux_raid_member"
/dev/sdc: UUID="43468a60-e0d2-6202-4e0c-320120beeee1" UUID_SUB="e5b7cd63-974c-b4ed-8061-fc1c405abb08" LABEL="joeslinux:0" TYPE="linux_raid_member"
/dev/sdd1: UUID="82374f03-c484-4420-8693-6ed0a7704b4e" TYPE="ext4" PARTUUID="049ace7e-01"
/dev/sde: UUID="74D8-8B03" TYPE="vfat"
/dev/nvme0n1: PTUUID="3acfb4f5" PTTYPE="dos"
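(Note that the three linux_raid_member entries share one UUID, 43468a60-…: that is the array’s own UUID, distinct from the ext4 filesystem UUID f7790191-… that my fstab line refers to; the ext4 filesystem lives inside /dev/md0.) For reference, the array definition can be captured with mdadm; the ARRAY line below is reconstructed from the blkid output above, not a saved copy:

sudo mdadm --detail --scan
# prints one ARRAY line per array, roughly:
# ARRAY /dev/md0 metadata=1.2 name=joeslinux:0 UUID=43468a60:e0d26202:4e0c3201:20beeee1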
Then I upgraded from Ubuntu 16.04 to 20.04. The upgrade died in the middle, with something about basic Python 3 packages needing to be removed. That left me with an inoperable computer, so I did a clean install of 20.04 on that SSD, nvme0n1. While doing so, I physically disconnected the four HDDs, sda through sdd, because the installer implied it wanted to use my sda as the install device and I couldn’t take the chance of letting that happen.
Now I have a clean, successful installation of 20.04. I presume all my work in my home directory is still intact, but I cannot access it at this time: I do not have my md0 array back.
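As far as I can tell, these commands are read-only and safe for checking whether the new kernel assembled anything on its own:

cat /proc/mdstat   # lists any arrays the kernel has already assembled (md0, md127, ...)
lsblk -f           # shows every block device with its filesystem type and UUID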
Here is my new fdisk -l:
Disk /dev/nvme0n1: 238.49 GiB, 256060514304 bytes, 500118192 sectors
Disk model: SAMSUNG MZVPW256HEGL-000H1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 233AD7D4-8BEC-407D-8096-A2C7BEA37CB7
Device Start End Sectors Size Type
/dev/nvme0n1p1 2048 1050623 1048576 512M EFI System
/dev/nvme0n1p2 1050624 500117503 499066880 238G Linux filesystem

Disk /dev/sda: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: HGST HTS721010A9
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x000680a0

Disk /dev/sdb: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: HGST HTS721010A9
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xd92e9edf
Device Boot Start End Sectors Size Id Type
/dev/sdb1 2048 1953525167 1953523120 931.5G 83 Linux

Disk /dev/sdc: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000DM010-2EP1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x76221e63

Disk /dev/sdd: 931.53 GiB, 1000204886016 bytes, 1953525168 sectors
Disk model: ST1000DM010-2EP1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0xee260f95

Disk /dev/sdf: 1.84 TiB, 2000398934016 bytes, 3907029168 sectors
Disk model: PSZ-HB
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x049ace7e
Device Boot Start End Sectors Size Id Type
/dev/sdf1 2048 3907029167 3907027120 1.8T 83 Linux
(loop0 through loop9 omitted from the above.)
I know sda, sdb, sdc, and sdd are the correct size. I also see a new sdf of the same size that md0 is supposed to be, 1.84 TiB. That is consistent with reality, so I know everything is there.
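Before changing anything, it should be possible to confirm which of these disks really are RAID members; my understanding is that mdadm --examine only reads the on-disk superblocks and changes nothing:

sudo mdadm --examine /dev/sda /dev/sdb /dev/sdc /dev/sdd
# for each member disk, prints the array UUID, the RAID level (raid10), and that disk's role in the array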
Currently my blkid shows only 1 line:
/dev/nvme0n1p2: UUID="ebb08f84-7501-4461-a5c4-c69c3c25d9b0" TYPE="ext4" PARTUUID="3ba58f92-47c7-423a-aad0-459ffe32cae1"
This is just the SSD with the operating system. Suspiciously, neither the UUID nor the PARTUUID matches what it was under 16.04. That leads me to worry that the UUID of the RAID array md0 has also changed.
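(My understanding, unless I have this wrong: the SSD’s UUIDs changed because the clean install recreated its partitions and filesystems, but the ext4 filesystem inside md0 was never reformatted, so its UUID should still be the f7790191-… value from my old fstab. Once the array is assembled, that can be checked with:)

sudo blkid /dev/md0
# the assembled array may also appear as /dev/md127; this should still report
# UUID="f7790191-84f3-4d9b-81b8-43de132244a2" TYPE="ext4"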
Anyway, I need advice on how to reconstruct the RAID array md0 and re-point my home folder. I don’t dare activate the line
UUID=f7790191-84f3-4d9b-81b8-43de132244a2 /home ext4 defaults 0 0
in the new fstab and reboot.
And I don’t dare fiddle around with mdadm without understanding exactly what I’m doing.
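From what I have read, the usual sequence is something like the sketch below, mounting read-only at a temporary mount point first so nothing in the new installation gets disturbed (/mnt/oldhome is just an example name):

sudo mdadm --assemble --scan             # scan all disks for RAID superblocks and assemble the arrays found
cat /proc/mdstat                         # confirm the array came up, and under which name (md0 or md127)
sudo mkdir -p /mnt/oldhome
sudo mount -o ro /dev/md0 /mnt/oldhome   # read-only mount; substitute md127 if that is the name it got
ls /mnt/oldhome                          # the old home directories (e.g. joe) should be visible here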
Thank you in advance for any help or suggestions,
I just ran mdadm as below. It only took seconds, and it worked the first time.
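(For the record, the standard form of the assemble command in this situation is the following; I won’t swear it is character-for-character what I typed:)

sudo mdadm --assemble --scan   # scans all drives for RAID superblocks and assembles any array it finds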
To keep the array mounted across reboots, I added this line to /etc/fstab:
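(In generic form it is the same shape as the old entry; the UUID is the one from my old 16.04 fstab, so verify yours with blkid /dev/md0 first. The nofail option is an optional safeguard so boot cannot hang if the array is ever missing:)

UUID=f7790191-84f3-4d9b-81b8-43de132244a2 /home ext4 defaults,nofail 0 0

To make the array assemble automatically at boot, the usual advice is also to record it in /etc/mdadm/mdadm.conf and refresh the initramfs: sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf, then sudo update-initramfs -u.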