I used a whole device as an LVM physical volume, with no partition table on it:
sudo pvcreate /dev/xvdg
Unfortunately, while this was in use, I then (I think) accidentally overwrote some data by writing a new partition table:
sudo fdisk /dev/xvdg
then: add a new partition, write the partition table, delete the partition, write an empty partition table.
This is where I am now. Everything still appears to work, but I am afraid to restart, unmount, etc...
- Is it broken?
- If yes, what is the best way to fix it?
Thanks!
Assuming you were using the whole disk as the LVM PV, rather than an individual partition within it, it should generally be just fine, since the LVM header is not in the first sector, where the partition table lives, at least when using 512-byte sectors.
The partition table is in the first sector. See for example here: "Hard disks can be divided into one or more logical disks called partitions. This division is recorded in the partition table, found in sector 0 of the disk."
The LVM header is by default in the second sector. See for example here: "By default, the LVM label is placed in the second 512-byte sector. You can overwrite this default by placing the label on any of the first 4 sectors. This allows LVM volumes to co-exist with other users of these sectors, if necessary."
Beware: I am unsure what happens if the sector size fdisk uses is larger, say 1024 bytes. The LVM label might still be in the second 512-byte sector, but fdisk might overwrite the whole first 1024-byte sector.
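If in doubt, you can check whether the LVM label is still intact without writing anything to the disk: dump the second 512-byte sector and look for LVM's ASCII signature LABELONE at its start (device name taken from the question):
# read-only check: prints the start of sector 1; an intact LVM label begins with "LABELONE"
sudo dd if=/dev/xvdg bs=512 skip=1 count=1 status=none | hexdump -C | head -n 2
If LABELONE shows up, the PV label itself survived.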
As an aside: If you are unsure and have access to additional space (e.g. on Amazon EC2), you could always create a volume of identical size, run pvcreate on it, add it to the volume group, use pvmove to move the data to the new volume, and then vgreduce to remove the affected volume.
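In commands, that would look roughly like this (a sketch: /dev/xvdh and myvg are assumed names for the new volume and your volume group, so substitute your own):
sudo pvcreate /dev/xvdh          # initialize the new volume as a PV
sudo vgextend myvg /dev/xvdh     # add it to the volume group
sudo pvmove /dev/xvdg /dev/xvdh  # migrate all extents off the affected PV
sudo vgreduce myvg /dev/xvdg     # remove the affected PV from the volume group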
Yeah, in 99.99% of cases it is broken, the reason being that you have overwritten the partition table. The LVM metadata resides in the second 512-byte sector of the PV, so if the new partition creation touched those sectors, your metadata has been wiped out. Essentially, a restart or an unmount will screw things up.
There are two possible (though perhaps not feasible) hacks.
1) If you know the exact partition table of the last known good filesystem, you can run fdisk and try to recreate it in the exact same order. You have to know at which sectors the old filesystem used to start and end. Create the partitions exactly as before and it might work out.
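A rough sketch of that fdisk session (all values below are placeholders; only the real sector numbers from the last known good layout will do):
sudo fdisk /dev/xvdg
# at the fdisk prompt, for each old partition:
#   n            create a new partition
#   p            primary (or e, matching the old type)
#   1            partition number, matching the old one
#   2048         first sector - placeholder, must match the old start exactly
#   20971519     last sector  - placeholder, must match the old end exactly
#   w            write the table and exit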
2) If things don't work out this way, there is another workaround using pvcreate. Your last known LVM metadata backup will be stored in a
/etc/lvm/archive/volume_group_name_XXXX.vg
file. You need to get the UUID of the PV from there. Then, if things are in your favour, you can recreate the PV with that UUID and restore the metadata. But if you can, please back up your data first. pvcreate doesn't touch user data, it only deals with metadata; however, if fsck finds any inconsistency at boot time, it can throw filesystem errors at you and potentially leave the disk unrecoverable.
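A minimal sketch of that recovery, keeping the placeholder archive name from above (volume_group_name, the UUID, and the LV name are placeholders for your real values):
# 1) find the old PV UUID (the "id = ..." line) in the archive file
sudo grep -A3 'pv0' /etc/lvm/archive/volume_group_name_XXXX.vg
# 2) recreate the PV in place with its old UUID, restoring metadata from the archive
sudo pvcreate --uuid <UUID-from-archive> --restorefile /etc/lvm/archive/volume_group_name_XXXX.vg /dev/xvdg
# 3) restore the volume group metadata itself
sudo vgcfgrestore -f /etc/lvm/archive/volume_group_name_XXXX.vg volume_group_name
# 4) reactivate the VG and check the filesystems before mounting
sudo vgchange -ay volume_group_name
sudo fsck /dev/volume_group_name/<lv_name>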