I am trying to back up and restore an LVM volume on a local Ubuntu 20.04 (Focal Fossa) server. The production server has an LV of 500 GB, but only 19 GB are used so far.
The local development server has 24 GB of free space, where I intend to restore the 19 GB from production. I pass -L 24G as a parameter when creating the LVM snapshot. The restore fails with "no space left on device":
Production Server:
sudo lvcreate -s /dev/vg0/test -n backup_test -L 24G
sudo dd if=/dev/vg0/backup_test | lz4 > test_lvm.ddimg.lz4
1048576000+0 records in
1048576000+0 records out
536870912000 bytes (537 GB, 500 GiB) copied, 967.79 s, 555 MB/s
sudo lvdisplay /dev/vg0/backup_test
--- Logical volume ---
LV Path /dev/vg0/backup_test
LV Name backup_test
VG Name vg0
LV UUID IsGBmM-VM7C-2sO4-VrC1-kHKg-EzcR-4Hej44
LV Write Access read/write
LV Creation host, time leo, 2021-07-04 12:45:45 +0200
LV snapshot status active destination for m360
LV Status available
# open 0
LV Size 500.00 GiB
Current LE 128000
COW-table size 24.00 GiB
COW-table LE 6144
Allocated to snapshot 0.01%
Snapshot chunk size 4.00 KiB
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:10
Local Test Server:
sudo lvcreate -n restore -L 24.5G data
sudo mkfs.ext4 /dev/data/restore
sudo lz4 -d test_lvm.ddimg.lz4 | sudo dd of=/dev/data/restore
Warning : using stdout as default output. Do not rely on this behavior: use explicit `-c` instead !
dd: writing to '/dev/data/restore': No space left on device
51380225+0 records in
51380224+0 records out
26306674688 bytes (26 GB, 24 GiB) copied, 1181.48 s, 22.3 MB/s
Is there a way to restore the 500 GB volume, which contains only 19 GB of data, onto a disk smaller than 500 GB?
The data you are trying to restore has a size of 500 GB, and that will not fit on a logical volume of size 24 GB. You can be more selective about what you're transferring, or you can allocate more space to the target volume group.
It sounds like the data on the logical volume you're backing up is only 19 GB. There are a few ways you can transfer that data.
If you only care about the data and not the filesystem, you could create a new filesystem (via mkfs.ext4 or so) on the target, and use tools that operate on files instead of block devices, like rsync(1) or tar(1). Alternatively, you could use fsarchiver(8), which can back up a filesystem and then restore it to a smaller device.
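For example, with fsarchiver the transfer could look roughly like this (a sketch using the device names from the question; the archive name test_lvm.fsa is only an example):
# on the production server: image the filesystem from the snapshot
sudo fsarchiver savefs test_lvm.fsa /dev/vg0/backup_test
# on the test server: recreate and fill the filesystem on the smaller LV
sudo fsarchiver restfs test_lvm.fsa id=0,dest=/dev/data/restore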
Finally, you could shrink the size of the block device you're backing up with resize2fs(8) and lvreduce(8). You can then back up that smaller volume, and restore to a similarly sized volume on the new server.
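For example, assuming the filesystem on /dev/vg0/test is ext4 and can be unmounted briefly (take a backup first, shrinking is not risk-free):
sudo umount /dev/vg0/test            # an ext4 filesystem must be offline to shrink
sudo e2fsck -f /dev/vg0/test         # resize2fs requires a clean check first
sudo resize2fs /dev/vg0/test 20G     # shrink the filesystem to 20G
sudo lvreduce -L 20G /dev/vg0/test   # shrink the LV to match (never below the filesystem size)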
The only way you could restore your data as a raw image onto a smaller disk is to use a block device that is compressed (e.g. VDO, or a loop device on top of a compressed filesystem such as btrfs or ZFS).
However, even if your data is only 19 GB, the free space might still contain data from previous writes, so the entire LV might not be very compressible.
As compressed filesystems you can use btrfs or ZFS: create a 500G zeroed file and map it to a loop device (with losetup), or, on ZFS, directly create a block device (zvol) that is compressed. Then restore your LV onto the block device you created in the previous step. Compression should give you a chance to restore all of the data.
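For example, on the test server a sparse, compressed ZFS volume could look like this (assuming a pool named tank already exists; the pool and zvol names are only placeholders):
# create a sparse 500G zvol with lz4 compression
sudo zfs create -s -V 500G -o compression=lz4 tank/restore
# restore the raw image into it; zeroed and repeated blocks compress away
sudo lz4 -dc test_lvm.ddimg.lz4 | sudo dd of=/dev/zvol/tank/restore bs=4M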
The concept of (un)used bytes doesn't exist at LVM's level. Whether a byte (or rather a sector) contains meaningful data or unused garbage is determined by the filesystem that lives inside the LV. LVM doesn't know what a filesystem is; all it does is take a bunch of disks and combine them logically according to your instructions. It doesn't care what you do with the combined volume.
This means that you're taking a snapshot of 500 GB of data. LVM doesn't understand which parts of this snapshot are meaningful and worth preserving and which are not.
What you want to achieve is possible at the filesystem level by imaging. Some software, like the free partclone, understands filesystem structures and can create an image - essentially a sparse file that contains only the parts of the filesystem that are in use. The gotcha here is that the target device has to be at least the same size as the source, because partclone doesn't adjust the filesystem's geometry: all stored pieces must go back to their original locations when the image is restored.
So the plan would be as follows: shrink the filesystem on the production side with resize2fs to something that fits in the 24 GB you have, create a partclone image of it, transfer that image, restore it with partclone onto the smaller LV on the test server, and finally grow the filesystem again so it fills the new volume.
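A rough sketch of that plan with the names from the question, assuming the filesystem is ext4 (partclone.ext4 comes from the partclone package; the image name test.pcl.img is only an example):
# on the production server: shrink the fs inside the snapshot, then image only the used blocks
sudo e2fsck -f /dev/vg0/backup_test
sudo resize2fs /dev/vg0/backup_test 20G
sudo partclone.ext4 -c -s /dev/vg0/backup_test -o test.pcl.img
# on the test server: restore the image, then grow the fs to fill the 24.5G LV
sudo partclone.ext4 -r -s test.pcl.img -o /dev/data/restore
sudo resize2fs /dev/data/restore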
A variant of this would be to resize the filesystem to 25 GB and then send over the LVM snapshot. It will error out with "no space left on device" because you're still sending the entire 500 GB, but that's okay, because only the first 25 GB of the volume will contain the filesystem.
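A sketch of that variant: shrink the filesystem as above (to no more than the target LV, so 24G rather than 25G with a 24.5G target), then reuse the commands from the question:
# on the production server: image the whole snapshot as before
sudo dd if=/dev/vg0/backup_test bs=4M | lz4 > test_lvm.ddimg.lz4
# on the test server: dd stopping with "no space left on device" is expected here
sudo lz4 -dc test_lvm.ddimg.lz4 | sudo dd of=/dev/data/restore bs=4M
sudo e2fsck -f /dev/data/restore
sudo resize2fs /dev/data/restore   # grow the filesystem to fill the 24.5G volume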
Anyway, you'll probably have to adjust the fstab if these are bootable volumes, so be prepared for that.
Two points to address.
First, creating a 24G LVM snapshot means that the snapshot can absorb 24G of changes between the original volume and the snapshot volume. Even if you never make changes to the snapshot volume itself, any change to the original volume has to be recorded as a difference between the two, and that eats into the snapshot's 24G.
Second, the snapshot device itself appears to be the same size as the original volume, which is why dd produces a 500 GB image no matter how little of the volume is actually in use.
To do what you are trying to do, here are a few ideas:
Copy at the file level instead of the block level: on the test server, create a filesystem on the new LV and mount it; on the source server, mount the snapshot and rsync its contents across (see the sketch below).
Or shrink the filesystem (and the LV) on the source so that it fits within 24G. Then do what you did.
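A sketch of the rsync idea, with placeholder mount points and host name (/mnt/restore, /mnt/snap, testserver):
# on the test server: filesystem and mount point for the new LV
sudo mkfs.ext4 /dev/data/restore
sudo mkdir -p /mnt/restore
sudo mount /dev/data/restore /mnt/restore
# on the source server: mount the snapshot read-only and copy the files over
sudo mkdir -p /mnt/snap
sudo mount -o ro /dev/vg0/backup_test /mnt/snap
sudo rsync -aAXH /mnt/snap/ testserver:/mnt/restore/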
To restore an LVM logical volume from its snapshot, you must use the lvconvert command with the --mergesnapshot option and the name of the logical volume snapshot. When the --mergesnapshot option is used, the snapshot is merged into the original logical volume and is then removed.
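For example, with the snapshot from the question (depending on the lvm2 version, the option may also be spelled --merge):
# merge the snapshot back into its origin; the origin reverts to the snapshot's contents
sudo lvconvert --mergesnapshot vg0/backup_test
# on older lvm2 releases:
sudo lvconvert --merge vg0/backup_test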