I resized my logical volume and filesystem and all went smoothly. I installed a new kernel, and after reboot I can't boot either the current or the former one. I get a "volume group not found" error after selecting the GRUB(2) option. Inspection from the BusyBox shell reveals that the volumes are not registered with the device mapper and that they are inactive. I wasn't able to mount them after activating them; I got a "file not found" error (mount /dev/mapper/all-root /mnt).
Any ideas how to proceed, or how to make them active at boot time? Or why the volumes are all of a sudden inactive at boot time?
Regards,
Marek
EDIT: Further investigation revealed that this had nothing to do with the resizing of the logical volumes. The fact that the logical volumes had to be activated manually in the ash shell after the failed boot, and a possible solution to this problem, are covered in my reply below.
So I managed to solve this eventually. There is a problem (bug) with detecting logical volumes, which is some sort of race condition (in my case possibly related to the fact that this happens inside KVM). This is covered in the discussion linked below. In my particular case (Debian Squeeze) the solution is as follows:
This helped me; hopefully it will help others too (strangely, this is not part of mainstream yet).
Link to patch: http://bugs.debian.org/cgi-bin/bugreport.cgi?msg=10;filename=lvm2_wait-lvm.patch;att=1;bug=568838
Below is a summary for posterity: create a startup script in /etc/init.d/lvm, make it executable, and register it to run at boot (a sketch follows). Should do the trick for Debian systems.
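A rough sketch of what such a script and its registration commands look like; the vgscan/vgchange calls, the LSB header, and the update-rc.d invocation below are assumptions on my part, so check the linked bug report for the exact version:
#!/bin/sh
### BEGIN INIT INFO
# Provides:          lvm
# Required-Start:
# Required-Stop:
# Default-Start:     S
# Default-Stop:
# Short-Description: Activate LVM volume groups early in the boot sequence
### END INIT INFO
# Rescan for volume groups and activate any inactive logical volumes.
case "$1" in
  start)
    /sbin/vgscan
    /sbin/vgchange -ay
    ;;
  stop)
    /sbin/vgchange -an
    ;;
  restart|force-reload)
    ;;
esac
exit 0
Then make it executable and register it with the boot sequence (on older setups you may need explicit start/stop priorities instead of "defaults"):
chmod 0755 /etc/init.d/lvm
update-rc.d lvm defaults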
If
vgscan
"finds" the volumes, you should be able to activate them with:
vgchange -ay /dev/volumegroupname
I am not sure what would cause them to go inactive after a reboot though.
I had this problem too. In the end this is what seemed to fix it:
Other things I tried:
GRUB_PRELOAD_MODULES="lvm"
GRUB_CMDLINE_LINUX="scsi_mod.scan=sync"
sudo grub-install /dev/sda && sudo grub-install /dev/sdb && sudo update-grub && sudo update-initramfs -u -k all
sudo apt-get install --reinstall lvm2 grub-pc grub-common
I went through and undid the other changes; this is the only one that mattered for me, though it's probably the least elegant.
Without any of the configuration details or error messages we'd need to give an actual answer, I'll take a stab in the dark with
grub-mkdevicemap
as a solution. Assuming your system uses an initramfs, there's probably a configuration problem there. You should update the initramfs image that GRUB loads at boot time (on Debian you do this with update-initramfs; I don't know about other distros).
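On Debian and derivatives that typically amounts to:
sudo update-initramfs -u -k all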
You could also do this by hand: unpack the initramfs image, change /etc/lvm/lvm.conf (or something like it) inside it, and then repack it.
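A sketch of the manual route, assuming a plain gzip-compressed cpio initramfs (the image name below is illustrative; adjust it to your kernel and keep a backup of the original image):
mkdir /tmp/initrd-work && cd /tmp/initrd-work
# unpack the compressed cpio archive
zcat /boot/initrd.img-3.2.0-4-amd64 | cpio -idmv
# edit the copy of lvm.conf that ships inside the image
vi etc/lvm/lvm.conf
# repack and overwrite the image (only after backing it up)
find . | cpio -o -H newc | gzip -9 > /boot/initrd.img-3.2.0-4-amd64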
I've got the same problem in my environment, running Red Hat 7.4 as a KVM guest. I'm running qemu-kvm-1.5.3-141 and virt-manager 1.4.1. At first I was running Red Hat 7.2 as the guest without any problem, but after upgrading the minor release from 7.2 to 7.4 and the kernel to the latest version, 3.10.0-693.5.2, something went wrong and the system couldn't boot my /var LV partition any more. The system dropped to emergency mode asking for the root password. After entering the root password and running the commands
lvm vgchange -ay
and
systemctl default
I was able to activate my /var LV and boot the system.
I haven't figured out what causes this issue, but my workaround was to include the LV /var in /etc/default/grub as you can see below:
GRUB_CMDLINE_LINUX="crashkernel=auto rd.lvm.lv=vg_local/root rd.lvm.lv=vg_local/var rd.lvm.lv=vg_local/swap rhgb quiet biosdevname=0 net.ifnames=0 ipv6.disable=1"
Then I had to run
grub2-mkconfig -o /boot/grub2/grub.cfg
and check that rd.lvm.lv=vg_local/var was included in the vmlinuz line of /boot/grub2/grub.cfg. After rebooting the system, I no longer got the error about activating my /var LV, and the system completes the boot process successfully.
I figured out that in my case the GRUB root was root=/dev/vgname/root, so the test in /usr/share/initramfs-tools/scripts/local-top/lvm2 was always false and the root volume never got activated.
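For illustration only (this is not the real script), the kind of pattern test that can cause this: if the activation logic only recognises /dev/mapper/* device paths, a root=/dev/vgname/root value falls through and nothing gets activated:
# illustrative sketch, not the actual local-top/lvm2 script
case "$ROOT" in
  /dev/mapper/*)
    lvm vgchange -ay    # activation only happens for this pattern
    ;;
  *)
    : # root=/dev/vgname/root lands here, so the root LV stays inactive
    ;;
esac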
I updated the root device path in /etc/fstab accordingly, re-ran the update commands, and that solved my problem.
We ran into this problem and found that disabling lvmetad by setting
use_lvmetad=0
in /etc/lvm/lvm.conf forced the volumes to be found and made accessible at boot.
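For reference, a minimal sketch of that change; the setting lives in the global section of /etc/lvm/lvm.conf, and if your root filesystem is on LVM you will probably also need to rebuild the initramfs so that early boot picks it up:
# /etc/lvm/lvm.conf (excerpt)
global {
    # scan devices directly instead of relying on the lvmetad caching daemon
    use_lvmetad = 0
}
Afterwards rebuild the initramfs, e.g. update-initramfs -u on Debian/Ubuntu or dracut -f on Red Hat based systems.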