I have a system with 3 SSD devices (/dev/sda, /dev/sdb, /dev/sdc) which contain a single LVM logical volume that spans all the devices. I have a single ext4 filesystem on the logical volume.
I think that one of the SSD devices (/dev/sdb) might be somewhat faulty and have reduced performance compared to the other devices.
Is there a command to get the list of files that are backed by that device?
I know that I can get the list of physical segments with sudo pvdisplay -m and the output looks like the following:
--- Physical volume ---
PV Name /dev/sda
VG Name storage
PV Size <1,82 TiB / not usable <1,09 MiB
Allocatable yes (but full)
PE Size 4,00 MiB
Total PE 476932
Free PE 0
Allocated PE 476932
PV UUID h3x3O1-1KWj-3pY6-kZ24-MVV4-54UE-ltEdfA
--- Physical Segments ---
Physical extent 0 to 476931:
Logical volume /dev/storage/vm
Logical extents 0 to 476931
--- Physical volume ---
PV Name /dev/sdb
VG Name storage
PV Size <3,64 TiB / not usable <3,84 MiB
Allocatable yes (but full)
PE Size 4,00 MiB
Total PE 953861
Free PE 0
Allocated PE 953861
PV UUID MsNlhh-W2It-CbX4-IxJn-lXJN-hlcd-EpBh9Q
--- Physical Segments ---
Physical extent 0 to 953860:
Logical volume /dev/storage/vm
Logical extents 476932 to 1430792
--- Physical volume ---
PV Name /dev/sdc
VG Name storage
PV Size <3,64 TiB / not usable <3,84 MiB
Allocatable yes (but full)
PE Size 4,00 MiB
Total PE 953861
Free PE 0
Allocated PE 953861
PV UUID sklK6w-XZd6-DqIp-ZT1g-O9rj-1ufw-UaC0z4
--- Physical Segments ---
Physical extent 0 to 953860:
Logical volume /dev/storage/vm
Logical extents 1430793 to 2384653
So I know that logical extents 476932 to 1430792 are the potentially problematic area. How can I map this logical extent range to actual files on the filesystem (ext4) on top of the LVM?
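If I understand the layout correctly, translating that extent range into ext4 block numbers is plain arithmetic. This assumes the default 4 KiB ext4 block size (tune2fs -l on the LV would confirm it), so treat the numbers as a sketch:

```shell
# Assumptions: PE size is 4 MiB (from pvdisplay above) and the ext4 block
# size is the default 4 KiB; one logical extent then covers exactly 1024
# filesystem blocks.
pe_size=$((4 * 1024 * 1024))                # bytes per physical/logical extent
fs_block=4096                               # assumed ext4 block size in bytes
blocks_per_extent=$((pe_size / fs_block))   # = 1024

first_le=476932                             # first LE stored on /dev/sdb
last_le=1430792                             # last LE stored on /dev/sdb
first_block=$((first_le * blocks_per_extent))
last_block=$(((last_le + 1) * blocks_per_extent - 1))
echo "ext4 blocks on /dev/sdb: $first_block-$last_block"
# → ext4 blocks on /dev/sdb: 488378368-1465132031
```

So the files I'm after would be the ones with data in that filesystem block range, if the block-size assumption holds.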
Basically I'm trying to figure out whether the device is actually faulty, or whether the access pattern for the files on it happens to be one that is problematic for the hardware. No device is reporting any errors and all the data looks good, but this single device performs worse than expected.
The system is in use, so I'd prefer to diagnose this online without overwriting any data. I know that if I could take the potentially problematic storage device offline and overwrite its contents, I could use fio to benchmark it and see whether it's working below spec.
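For what it's worth, fio can also run a purely read-only benchmark against the live device: with the readonly safety option it refuses to issue any writes, so the data stays intact (the reads still compete with live traffic, though). A job file sketch along those lines, in case a read-only comparison between the three devices is already informative:

```ini
; non-destructive random-read check of the suspect device
[sdb-readcheck]
filename=/dev/sdb
readonly=1
rw=randread
bs=4k
ioengine=libaio
iodepth=32
direct=1
runtime=60
time_based=1
```

Running the same job against /dev/sda and /dev/sdc would give a baseline to compare against.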
$ lsblk -s
...
storage-vm 253:0 0 9,1T 0 lvm /mnt/storage
├─sda 8:0 0 1,8T 0 disk
├─sdb 8:16 0 3,7T 0 disk
└─sdc 8:32 0 3,7T 0 disk
I'm basically asking how to get the list of files backed by a single storage device when the filesystem spans multiple storage devices.
Or if you can provide instructions for figuring out where a given file is actually stored, that would be fine, too. I would then run that routine for every file to find which files are backed by the device I'm interested in. I'm aware that a single big file may be backed by all the devices if it's fragmented over a wide range of logical extents, so the answer could be that one file is backed by every device, but I currently have no idea how to determine that either.
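For the per-file direction, I sketched the inverse mapping: filefrag -v reports a file's extents as filesystem block numbers, and (again assuming 4 KiB ext4 blocks) a small helper can tell which PV a given block lands on, using the segment table from pvdisplay -m above. The boundary numbers are hard-coded from my layout:

```shell
# Map an ext4 block number to the PV holding it, per the pvdisplay -m
# segment table above (one 4 MiB extent = 1024 ext4 blocks of 4 KiB).
device_for_block() {
    local le=$(( $1 / 1024 ))              # logical extent containing the block
    if   [ "$le" -le 476931 ];  then echo /dev/sda
    elif [ "$le" -le 1430792 ]; then echo /dev/sdb
    else                             echo /dev/sdc
    fi
}

device_for_block 0             # → /dev/sda
device_for_block 488378368     # → /dev/sdb
```

The idea would be to run filefrag -v on each file, extract the physical offsets of its extents, and feed them through this helper, but doing that for millions of files seems slow, so I'm hoping there is a better tool.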
$ sudo vgdisplay
--- Volume group ---
VG Name storage
System ID
Format lvm2
Metadata Areas 3
Metadata Sequence No 6
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 1
Max PV 0
Cur PV 3
Act PV 3
VG Size <9,10 TiB
PE Size 4,00 MiB
Total PE 2384654
Alloc PE / Size 2384654 / <9,10 TiB
Free PE / Size 0 / 0
VG UUID MOrTMY-5Dly-48uQ-9Fa8-JNvf-tont-9in7ol
$ sudo lvdisplay
--- Logical volume ---
LV Path /dev/storage/vm
LV Name vm
VG Name storage
LV UUID RDkaLH-mh6C-cXxT-6ojc-DxkB-o4jD-3CMHdl
LV Write Access read/write
LV Creation host, time staging, 2021-01-21 09:57:06 +0200
LV Status available
# open 1
LV Size <9,10 TiB
Current LE 2384654
Segments 3
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0