I tried
multipath -v2 -d
so I could find the WWID of my laptop's integrated card reader (it's /dev/sdc) and blacklist it, but nothing was returned, just a blank line.
Any ideas how to stop the "sdc: can't store path info" messages from filling my logs?
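What I was thinking of trying is a blacklist entry in /etc/multipath.conf, roughly like the sketch below (untested; the devnode pattern assumes the card reader always shows up as sdc, and the wwid line is only a placeholder until I can actually read the WWID):

blacklist {
    # by device node (fragile, since the kernel may rename the device)
    devnode "^sdc$"
    # or, preferably, by WWID once it is known:
    # wwid "<card-reader-wwid>"
}

followed by sudo systemctl restart multipathd. Presumably the WWID could also be read with /lib/udev/scsi_id --whitelisted --device=/dev/sdc, but I have not verified that this works for a card reader.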
I'm booting Ubuntu Server 20.04 from a single NVMe disk. For the past couple of days, multipath has been spamming my syslog with these messages:
Jun 11 20:10:14 xb multipath: nvme0n1: failed to get udev uid: Invalid argument
Jun 11 20:10:14 xb multipath: nvme0n1: uid = eui.0000000001000000e4d25c7f117c5001 (sysfs)
Jun 11 20:10:17 xb multipath: nvme0n1: failed to get udev uid: Invalid argument
Jun 11 20:10:17 xb multipath: nvme0n1: uid = eui.0000000001000000e4d25c7f117c5001 (sysfs)
Jun 11 20:10:20 xb multipath: nvme0n1: failed to get udev uid: Invalid argument
Jun 11 20:10:20 xb multipath: nvme0n1: uid = eui.0000000001000000e4d25c7f117c5001 (sysfs)
Jun 11 20:11:34 xb multipath: nvme0n1: failed to get udev uid: Invalid argument
Jun 11 20:11:34 xb multipath: nvme0n1: uid = eui.0000000001000000e4d25c7f117c5001 (sysfs)
Jun 11 20:11:37 xb multipath: nvme0n1: failed to get udev uid: Invalid argument
Jun 11 20:11:37 xb multipath: nvme0n1: uid = eui.0000000001000000e4d25c7f117c5001 (sysfs)
In the multipathd console I can list the block devices, which confirms that it is monitoring my NVMe disk:
multipathd> show devices
available block devices:
loop1 devnode blacklisted, unmonitored
nvme0n1 devnode whitelisted, monitored
loop6 devnode blacklisted, unmonitored
loop4 devnode blacklisted, unmonitored
loop2 devnode blacklisted, unmonitored
loop0 devnode blacklisted, unmonitored
loop7 devnode blacklisted, unmonitored
loop5 devnode blacklisted, unmonitored
loop3 devnode blacklisted, unmonitored
multipathd>
I tried the command:
remove map nvme0n1
but it only returned: fail
So the question is: how can I get multipathd to stop monitoring my NVMe disk and spamming the syslog?
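One approach I was going to try (a sketch only, not tested yet) is to blacklist the disk in /etc/multipath.conf using the WWID that multipath itself prints in the log lines above, and then restart the daemon:

blacklist {
    # WWID of the NVMe disk as reported in the syslog messages above
    wwid "eui.0000000001000000e4d25c7f117c5001"
}

sudo systemctl restart multipathd

Is that the right way to do it, or is there a cleaner way to make multipathd ignore a single local NVMe disk?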
I have a setup of FCP disks
-> multipath
-> LVM
that is no longer being mounted after an upgrade from 18.04 to 20.04.
I was seeing these warnings at boot, which I assumed was just LVM sorting out the duplicates:
May 28 09:00:43 s1lp05 lvm[746]: WARNING: Not using device /dev/sdd1 for PV q1KTMM-fkpM-Ewvm-T4qd-WgO8-hV79-qXpUpb.
May 28 09:00:43 s1lp05 lvm[746]: WARNING: Not using device /dev/sdi1 for PV q1KTMM-fkpM-Ewvm-T4qd-WgO8-hV79-qXpUpb.
May 28 09:00:43 s1lp05 lvm[746]: WARNING: Not using device /dev/sdn1 for PV q1KTMM-fkpM-Ewvm-T4qd-WgO8-hV79-qXpUpb.
May 28 09:00:43 s1lp05 lvm[746]: WARNING: PV q1KTMM-fkpM-Ewvm-T4qd-WgO8-hV79-qXpUpb prefers device /dev/sds1 because device was seen first.
May 28 09:00:43 s1lp05 lvm[746]: WARNING: PV q1KTMM-fkpM-Ewvm-T4qd-WgO8-hV79-qXpUpb prefers device /dev/sds1 because device was seen first.
May 28 09:00:43 s1lp05 lvm[746]: WARNING: PV q1KTMM-fkpM-Ewvm-T4qd-WgO8-hV79-qXpUpb prefers device /dev/sds1 because device was seen first.
But later on, auto-activation fails because of the duplicates:
May 28 09:00:56 s1lp05 systemd[1]: Starting LVM event activation on device 253:8...
May 28 09:00:56 s1lp05 lvm[1882]: pvscan[1882] PV /dev/mapper/mpathd-part1 is duplicate for PVID q1KTMMfkpMEwvmT4qdWgO8hV79qXpUpb on 253:8 and 8:49.
May 28 09:00:57 s1lp05 systemd[1]: lvm2-pvscan@253:8.service: Main process exited, code=exited, status=5/NOTINSTALLED
May 28 09:00:57 s1lp05 systemd[1]: lvm2-pvscan@253:8.service: Failed with result 'exit-code'.
May 28 09:00:57 s1lp05 systemd[1]: Failed to start LVM event activation on device 253:8.
And it finally gives up:
May 28 09:02:12 s1lp05 systemd[1]: dev-vgdisks-lv_tmp.device: Job dev-vgdisks-lv_tmp.device/start timed out.
lvdisplay
then reported the logical volumes with "LV Status NOT available".
It seems LVM now scans more devices (or the kernel presents more of them); I didn't have these issues on 18.04.
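As a workaround I'm considering (a sketch only, not verified on this machine) telling LVM to ignore the raw sd* paths entirely and to scan only the multipath devices, via a global_filter in /etc/lvm/lvm.conf. The mpath* name pattern assumes user_friendly_names, and the accept/reject rules would of course need to match wherever the root volume actually lives:

devices {
    # accept the multipath maps, reject the underlying SCSI paths;
    # anything not matched (e.g. the root disk) stays accepted by default
    global_filter = [ "a|^/dev/mapper/mpath.*|", "r|^/dev/sd.*|" ]
}

After changing lvm.conf the initramfs probably needs to be rebuilt (sudo update-initramfs -u), since it carries its own copy of the LVM configuration. Is that the intended way to handle this on 20.04, or did the upgrade change how duplicate PVs are supposed to be resolved?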
I just installed a new Ubuntu 20.04 server as a virtual machine on an ESX server.
When I look into the system log, I see lots of multipath entries:
multipathd[651]: sda: add missing path
multipathd[651]: sda: failed to get udev uid: Invalid argument
multipathd[651]: sda: failed to get sysfs uid: Invalid argument
multipathd[651]: sda: failed to get sgio uid: No such file or directory
multipathd[651]: sda: add missing path
multipathd[651]: sda: failed to get udev uid: Invalid argument
multipathd[651]: sda: failed to get sysfs uid: Invalid argument
multipathd[651]: sda: failed to get sgio uid: No such file or directory
I think multipath is simply not configured, and my question is whether I can disable it. I checked several Ubuntu 20.04 servers, and multipath is enabled by default on all of them.
Does it even make sense to have multipath active on a machine like this?
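Two things I was considering (sketches only; the VMware vendor/product strings below are what I would expect a virtual disk to report and should be confirmed first, e.g. with cat /sys/block/sda/device/vendor and cat /sys/block/sda/device/model):

# option 1: disable the daemon entirely
sudo systemctl disable --now multipathd.service multipathd.socket

# option 2: keep multipathd, but blacklist the virtual disk in
# /etc/multipath.conf and restart multipathd afterwards
blacklist {
    device {
        vendor  "VMware"
        product "Virtual disk"
    }
}

Is either of these the recommended approach, or is there a reason 20.04 enables multipath even on single-disk VMs?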
I am trying to use multipath for the first time. Even though I have read the documentation both here at Ubuntu and at Red Hat, I'm not getting any closer to a solution.
I have 4 disks.
What I'm trying to set up is this:
mpatha
/dev/sdb
/dev/sdc
mpathb
/dev/sdd
/dev/sde
But for now I get this:
sdb 259:0 0 1.1T 0 disk
└─mpatha 253:1 0 1.1T 0 mpath
sdc 259:1 0 1.1T 0 disk
└─mpathb 253:2 0 1.1T 0 mpath
sdd 259:2 0 1.1T 0 disk
└─mpathd 253:3 0 1.1T 0 mpath
sde 259:3 0 1.1T 0 disk
My config looks like this:
defaults {
    user_friendly_names yes
    find_multipaths no
    max_fds 32
    uid_attribute ID_WWN
    path_checker directio
}
blacklist_exceptions {
    property "(ID_WWN|SCSI_IDENT_.*|ID_SERIAL|DEVTYPE)"
    devnode "sd*"
}
But how do I specify which drives go to which multipath groups?
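What I was planning to try (a sketch; the WWIDs are placeholders that would need to be replaced with the real values, e.g. from /lib/udev/scsi_id --whitelisted --device=/dev/sdb) is a multipaths section that pins each WWID to an alias:

multipaths {
    multipath {
        # placeholder: WWID shared by the two paths that should form mpatha
        wwid  "<wwid-of-first-LUN>"
        alias mpatha
    }
    multipath {
        # placeholder: WWID shared by the two paths that should form mpathb
        wwid  "<wwid-of-second-LUN>"
        alias mpathb
    }
}

My understanding (which may be wrong) is that multipath only bundles paths reporting the same WWID, so if sdb and sdc currently end up in separate maps they must be presenting different WWIDs and cannot be grouped by configuration alone. Is that correct, or is there a way to force the grouping?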