I had a setup using FCP disks
-> Multipath
-> LVM
that was no longer being mounted after an upgrade from 18.04 to 20.04.
I was seeing these warnings at boot; I assumed that was fine and that LVM was just sorting out the duplicates:
May 28 09:00:43 s1lp05 lvm[746]: WARNING: Not using device /dev/sdd1 for PV q1KTMM-fkpM-Ewvm-T4qd-WgO8-hV79-qXpUpb.
May 28 09:00:43 s1lp05 lvm[746]: WARNING: Not using device /dev/sdi1 for PV q1KTMM-fkpM-Ewvm-T4qd-WgO8-hV79-qXpUpb.
May 28 09:00:43 s1lp05 lvm[746]: WARNING: Not using device /dev/sdn1 for PV q1KTMM-fkpM-Ewvm-T4qd-WgO8-hV79-qXpUpb.
May 28 09:00:43 s1lp05 lvm[746]: WARNING: PV q1KTMM-fkpM-Ewvm-T4qd-WgO8-hV79-qXpUpb prefers device /dev/sds1 because device was seen first.
May 28 09:00:43 s1lp05 lvm[746]: WARNING: PV q1KTMM-fkpM-Ewvm-T4qd-WgO8-hV79-qXpUpb prefers device /dev/sds1 because device was seen first.
May 28 09:00:43 s1lp05 lvm[746]: WARNING: PV q1KTMM-fkpM-Ewvm-T4qd-WgO8-hV79-qXpUpb prefers device /dev/sds1 because device was seen first.
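To see which of the duplicate paths LVM actually picked for that PV UUID, a quick check is:
# list PVs with the device LVM chose, their UUIDs and VG
sudo pvs -o pv_name,pv_uuid,vg_name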
But later on, auto-activation failed on the duplicates:
May 28 09:00:56 s1lp05 systemd[1]: Starting LVM event activation on device 253:8...
May 28 09:00:56 s1lp05 lvm[1882]: pvscan[1882] PV /dev/mapper/mpathd-part1 is duplicate for PVID q1KTMMfkpMEwvmT4qdWgO8hV79qXpUpb on 253:8 and 8:49.
May 28 09:00:57 s1lp05 systemd[1]: lvm2-pvscan@253:8.service: Main process exited, code=exited, status=5/NOTINSTALLED
May 28 09:00:57 s1lp05 systemd[1]: lvm2-pvscan@253:8.service: Failed with result 'exit-code'.
May 28 09:00:57 s1lp05 systemd[1]: Failed to start LVM event activation on device 253:8.
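For later inspection, the failing unit from the log can be queried directly; the unit name below is copied from the messages above:
# show the state and the journal of the failed event-activation unit
systemctl status lvm2-pvscan@253:8.service
journalctl -b -u lvm2-pvscan@253:8.service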
And finally it gave up:
May 28 09:02:12 s1lp05 systemd[1]: dev-vgdisks-lv_tmp.device: Job dev-vgdisks-lv_tmp.device/start timed out.
lvdisplay then reported the affected LVs as "LV Status NOT available".
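The state of the LV that timed out can be checked directly; the VG/LV names below are taken from the timeout message above:
# show the activation state of the affected LV
sudo lvdisplay /dev/vgdisks/lv_tmp | grep -E "LV Path|LV Status"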
It seems LVM now scans more (or the kernel presents more) devices. I didn't have these issues on 18.04.
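On a multipath setup it is easy to confirm that those sd* devices are just different paths to the same LUN behind one mpath map, for example with:
# list the multipath maps and the sd* path devices grouped under them
sudo multipath -ll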
I have found the solution and want to document it here for everyone else who might face the same issue.
Realizing that the root cause was the duplicate devices, I thought I'd filter the extra paths out. So I added the necessary filter lines to /etc/lvm/lvm.conf (the second line because the bug triggered with pvscan/lvmetad). I knew this is needed at boot as well, so to get that config into the boot-time initramfs I ran the usual initramfs update.
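My exact lines aren't preserved in this write-up, so the following is only a reconstruction of what such entries typically look like: whitelist the multipath maps and reject the underlying sd* paths (adjust the regexes if other PVs, e.g. on your root disk, live outside the multipath maps), then rebuild the initramfs.
devices {
    # accept only the multipath maps, reject every other block device
    filter = [ "a|/dev/mapper/mpath.*|", "r|.*|" ]
    # global_filter is the one honoured by pvscan/lvmetad, the "second line" mentioned above
    global_filter = [ "a|/dev/mapper/mpath.*|", "r|.*|" ]
}
sudo update-initramfs -u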
Yet while so far this seemed to be a normal "need to update the config after an upgrade" case (which would still be worth knowing about), things didn't work out.
Interestingly, I found a /etc/lvm/lvm.conf.dpkg-dist file from the upgrade and remembered that I had chosen to keep my old config. That was the problem: after restoring the package's conffile and adding the two lines above again, things started to work.
I haven't kept my old lvm.conf to analyze the case in detail, but the lesson learned for me regarding lvm.conf is to save the old file and carry any custom config over into the new default file (instead of keeping the old config). I hope this helps someone else as well.
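A generic way to do that (not from my original notes) is to diff the locally modified config against the packaged default that dpkg left behind, and then re-apply only the intentional changes on top of the new file:
# compare the local config with the shipped default from the upgrade
diff -u /etc/lvm/lvm.conf /etc/lvm/lvm.conf.dpkg-dist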