I somehow messed up some thinly provisioned LVM volumes on Ubuntu 14.04 and now I want to start fresh by deleting the thin pool with all its volumes and data inside. Unfortunately this fails and I can't find a solution.
The logical volumes look like this:
user@server1:~$ sudo lvs
dm_report_object: report function failed for field data_percent
LV         VG  Attr      LSize   Pool       Origin Data%  Move Log Copy% Convert
project2   vg0 Vwi-i-tz-  22.00g mythinpool
project1   vg0 Vwi---tz-  20.00g mythinpool
project3   vg0 Vwi---tz-  21.00g mythinpool
home       vg0 -wi-ao--- 140.00g
mythinpool vg0 twi-i-tz-  78.82g                   52.15
root       vg0 -wi-ao---  10.00g
swap       vg0 -wi-ao---   4.00g
tmp        vg0 -wi-ao---   5.00g
Now I want to remove the thin pool along with the three thin volumes inside:
sudo lvremove /dev/vg0/mythinpool
Removing pool mythinpool will also remove 3 thin volume(s). OK? [y/n]: y
Do you really want to remove and DISCARD logical volume project1? [y/n]: y
device-mapper: message ioctl on failed: Invalid argument
Unable to deactivate open vg0-mythinpool_tdata (252:5)
Unable to deactivate open vg0-mythinpool_tmeta (252:4)
Failed to deactivate vg0-mythinpool-tpool
Failed to resume mythinpool.
Failed to update thin pool mythinpool.
I don't care about the data inside mythinpool, but the rest of the volume group vg0 MUST stay intact. How could I solve this problem? Thank you for any help on this.
EDIT 1: After following the answer from shodanshok I was able to remove one thin volume by booting into CentOS 7, but unfortunately removing the other two volumes and the thin pool itself fails with a different error: transaction_id mismatch.
There is also no free space left in the volume group, so lvconvert --repair fails as well.
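For reference, lvconvert --repair rebuilds the pool metadata into a fresh metadata LV, which is why it needs free space in the VG. A sketch of that path, assuming vg0/mythinpool as in the question and a hypothetical spare disk /dev/sdX:

```shell
# Check how much free space the VG actually has (VFree column):
vgs vg0
# If an unused disk is available, it can be added to make room
# (/dev/sdX is an assumption - substitute a real, empty device):
# vgextend vg0 /dev/sdX
# The pool must be inactive before repairing it:
lvchange -an vg0/mythinpool
lvconvert --repair vg0/mythinpool
```

These commands need root and a real LVM setup, so treat this as an outline rather than something to paste blindly.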
I finally solved it with a few simple steps as described here: remove corrupt LVM thin pool
After following the advice from @shodanshok to boot into a live CentOS 7 system from a USB stick attached to the server, I was able to run the described commands and eventually get rid of the corrupt thin pool without damaging the root file system that resides in the same volume group.
Thank you everybody for your helpful advice that led eventually to the solution.
Something is keeping your thin volumes open. Please do the following:
lsof | grep mountpoint
to find the offending processes. Kill them and try to unmount the filesystems.
EDIT:
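A sketch of that hunt, assuming one of the thin volumes is mounted at /mnt/project1 (the mountpoint is a placeholder, substitute yours):

```shell
# List processes holding files open under the mountpoint:
lsof +D /mnt/project1
# Alternative view via fuser, showing PIDs per mount:
fuser -vm /mnt/project1
# Kill the holders (sends SIGKILL - use with care):
fuser -km /mnt/project1
umount /mnt/project1
```

Once nothing holds the volume open, the lvremove should no longer hit the "Unable to deactivate" errors.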
As you can't use a live image and your rescue system has no thin-volume support, we can try an alternate route. Basically, we will set the "skip activation flag" on your thin volumes/pool and reboot the machine. Follow these steps:
lvchange -ky vg0/project1 ; lvchange -ky vg0/project2 ; lvchange -ky vg0/project3 ; lvchange -ky vg0/mythinpool
After the reboot the volumes should come up inactive; then remove them with lvremove.
However, if any of these volumes are needed for machine boot you will end with an unbootable machine. Be sure to have a "plan B" to restore the machine via a recovery console or such.
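Taken together, the steps above can be sketched as follows (volume names taken from the question):

```shell
# Set the activation-skip flag on each thin volume and on the pool,
# so they stay inactive after the next boot:
for lv in project1 project2 project3 mythinpool; do
    lvchange -ky "vg0/$lv"
done
reboot
# After the reboot, with the pool inactive, remove it;
# this also removes the three thin volumes inside:
lvremove vg0/mythinpool
```

As noted above, have a recovery plan ready before rebooting in case the machine depends on one of these volumes.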
EDIT 2
If your system does not support the -k flag, you can try using lvchange -aay volumename and reboot. This marks the volume for auto-activation, which only works for volumes listed in /etc/lvm/lvm.conf.
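The lvm.conf setting involved is auto_activation_volume_list in the activation section. A fragment sketch (volume names assumed from the question) that keeps the healthy volumes auto-activated while leaving the thin volumes out:

```
# /etc/lvm/lvm.conf (fragment) - with this list set, lvchange -aay
# only activates the volumes named here, so the thin volumes and
# the pool stay inactive after reboot.
activation {
    auto_activation_volume_list = [ "vg0/root", "vg0/swap", "vg0/home", "vg0/tmp" ]
}
```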