I recently reinstalled my Ubuntu server with 10.04, and am having trouble reactivating the LVM logical volume which houses all my non-critical data. `/dev/sda`, `/dev/sdb`, `/dev/sdc`, and `/dev/sdd` are all assembled into a RAID-5 array `/dev/md0`, upon which sits a single VG `media` with LV `part1` (I think; I used to be able to manually mount it with `vgchange -ay media && mount /dev/mapper/media-part1`). My issue is that currently I cannot get the system to detect the VG and activate it. Not 20 minutes ago I had it working just fine (I had encountered this issue on the previous startup as well, but failed to write down what steps I took to actually get the VG activated when I last booted the system).
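For reference, the manual activation sequence I used on the old install was roughly the following. This is a sketch from memory; the mount point `/mnt/media` is a placeholder, since I no longer remember the exact path I used:

```shell
# Check that the RAID array is assembled and clean
cat /proc/mdstat

# Activate the volume group, then mount the logical volume
# (/mnt/media is a placeholder mount point, not my original one)
sudo vgchange -ay media
sudo mount /dev/mapper/media-part1 /mnt/media
```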
The RAID assembles just fine and is clean, but I cannot get the VG to show up or mount. `pvck /dev/md0` displays:

```
Device /dev/md0 not found (or ignored by filtering).
```
The filter in my `/etc/lvm/lvm.conf` is `filter = [ "a/.*/" ]`.
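In case the catch-all pattern is somehow not matching, I assume a more explicit filter that accepts the md device up front would look like this (a sketch; the `a|/dev/md.*|` rule is my addition, not something from my current config):

```
filter = [ "a|/dev/md.*|", "a/.*/" ]
```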
`pvck /dev/md0p1` displays:

```
Could not find LVM label on /dev/md0p1
```
`pvdisplay` produces the following output:

```
Setting global/locking_type to 1
Setting global/wait_for_locks to 1
File-based locking selected.
Setting global/locking_dir to /var/lock/lvm
Locking /var/lock/lvm/P_global RB
Scanning for physical volume names
/dev/ram0: No label detected
/dev/md0p1: Label for sector 1 found at sector 0 - ignoring
/dev/md0p1: No label detected
/dev/ram1: No label detected
/dev/sda1: Label for sector 1 found at sector 0 - ignoring
/dev/sda1: No label detected
/dev/ram2: No label detected
/dev/ram3: No label detected
/dev/ram4: No label detected
/dev/ram5: No label detected
/dev/ram6: No label detected
/dev/ram7: No label detected
/dev/ram8: No label detected
/dev/ram9: No label detected
/dev/ram10: No label detected
/dev/ram11: No label detected
/dev/ram12: No label detected
/dev/ram13: No label detected
/dev/ram14: No label detected
/dev/ram15: No label detected
/dev/sdb1: No label detected
/dev/sde1: No label detected
/dev/sdf1: No label detected
/dev/sdg1: No label detected
/dev/sdh1: No label detected
/dev/sdi1: No label detected
/dev/root: No label detected
/dev/sdj3: No label detected
/dev/sdj4: No label detected
/dev/sdj5: No label detected
/dev/sdj6: No label detected
Unlocking /var/lock/lvm/P_global
```
`/dev/md0p1` is where the PV should be stored, but it's not showing up. Sadly, I do not have my `/etc/lvm/backup` directory from my previous installation.
I'm pretty sure the data is all there; I just need to know:

a) How can I force LVM to search the `/dev/md0` device for the volume group, and

b) How can I fix this so that the system will detect and activate the volume group on startup (the RAID array already assembles on startup)?
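My understanding is that a forced rescan should look something like the commands below. This is a sketch of what I believe ought to work, not something that has actually made the VG appear for me:

```shell
# Drop LVM's cached device list so the next scan starts fresh
# (cache_dir is /etc/lvm/cache in my config)
sudo rm -f /etc/lvm/cache/.cache

# Rescan all block devices for PV labels, then for volume groups
sudo pvscan -vv
sudo vgscan --mknodes -vv

# If the VG is found, activate it
sudo vgchange -ay media
```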
I'm not sure I fully understand how exactly the LVM sits upon the physical devices, so if I seem confused in my terminology, please correct it. (PVs are physical devices, a VG sits atop one or more PVs, and there are one or more LVs in a VG, somewhat like partitions on a conventional hard disk?)
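To illustrate my understanding of that layering, I believe the original setup would have been created with something like the following (a sketch; I don't remember the exact sizes or options I used):

```shell
# PV: mark the RAID array as an LVM physical volume
sudo pvcreate /dev/md0

# VG: build the volume group "media" on that PV
sudo vgcreate media /dev/md0

# LV: carve the logical volume "part1" out of the VG
sudo lvcreate -n part1 -l 100%FREE media

# The LV then appears as /dev/mapper/media-part1 and can hold a filesystem
```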
My current lvm.conf (as reported by `lvm dumpconfig`) is:

```
devices {
    dir="/dev"
    scan="/dev/disk"
    preferred_names=[]
    filter="a/.*/"
    cache_dir="/etc/lvm/cache"
    cache_file_prefix=""
    write_cache_state=1
    sysfs_scan=1
    md_component_detection=1
    md_chunk_alignment=1
    data_alignment_detection=1
    data_alignment=0
    data_alignment_offset_detection=1
    ignore_suspended_devices=0
}
dmeventd {
    mirror_library="libdevmapper-event-lvm2mirror.so"
    snapshot_library="libdevmapper-event-lvm2snapshot.so"
}
activation {
    udev_sync=1
    missing_stripe_filler="error"
    reserved_stack=256
    reserved_memory=8192
    process_priority=-18
    mirror_region_size=512
    readahead="auto"
    mirror_log_fault_policy="allocate"
    mirror_device_fault_policy="remove"
}
global {
    umask=63
    test=0
    units="h"
    si_unit_consistency=1
    activation=1
    proc="/proc"
    locking_type=1
    wait_for_locks=1
    fallback_to_clustered_locking=1
    fallback_to_local_locking=1
    locking_dir="/var/lock/lvm"
    prioritise_write_locks=1
}
shell {
    history_size=100
}
backup {
    backup=1
    backup_dir="/etc/lvm/backup"
    archive=1
    archive_dir="/etc/lvm/archive"
    retain_min=10
    retain_days=30
}
log {
    verbose=0
    syslog=1
    overwrite=0
    level=0
    indent=1
    command_names=0
    prefix=" "
}
```
EDIT: It seems LVM refuses to scan the MD devices.

```
dandroid@tinuvael:/etc/lvm$ sudo vgcfgrestore --test --verbose media
Test mode: Metadata will NOT be updated.
Wiping cache of LVM-capable devices
Couldn't find device with uuid 'iTmyql-LYQv-N1GD-6aM0-BHco-uHEe-taHhBI'.
Cannot restore Volume Group media with 1 PVs marked as missing.
Restore failed.
Test mode: Wiping internal cache
Wiping internal VG cache
```
```
dandroid@tinuvael:/etc/lvm$ sudo blkid
/dev/md0: UUID="iTmyql-LYQv-N1GD-6aM0-BHco-uHEe-taHhBI" TYPE="LVM2_member"
/dev/sdc: UUID="b81c877f-6542-d03b-4e08-ceb6032f5cfe" TYPE="linux_raid_member"
/dev/sdd: UUID="b81c877f-6542-d03b-4e08-ceb6032f5cfe" TYPE="linux_raid_member"
```
I have trimmed several irrelevant lines, but the above shows that the RAID array is assembled and carries exactly the UUID that LVM is searching for, yet LVM still ignores the device even after I added `types = [ "md", 16 ]` to my configuration.
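For completeness, this is how I added that line to the `devices` section of `/etc/lvm/lvm.conf` (only the `types` line is new relative to the dump above; the rest of the section is unchanged):

```
devices {
    ...
    md_component_detection=1
    types = [ "md", 16 ]
    ...
}
```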