On my intranet server, I have a 100.00 GiB partition /dev/sda5, which I use as a physical volume for lvm2.
- It's the only physical volume in my volume group vg01.
- vg01 currently contains one logical volume lv01, using the full 100.00 GiB - well, actually 99.99 GiB due to some rounding (that's where the problem starts).
- lv01 contains an ext3 file system, using the entire space.
I want to reduce lv01 to approximately 97 GiB, so I can create lv02 with approx. 3 GiB (I need it to take lvm snapshots).
What I did so far:
e2fsck -f /dev/mapper/vg01-lv01
resize2fs /dev/mapper/vg01-lv01 97G
This has worked well. But now I'll have to run
lvreduce --size ? /dev/mapper/vg01-lv01
And I'm not sure which exact value to specify. The lvreduce man page explicitly warns that the resulting size must not be smaller than the file system. I also don't want to make it larger than it has to be. But now I have different numbers:
- I specified 97G in resize2fs.
- df -h says it's 96G.
- df says it's 100115936 1K-blocks.
- lvdisplay (of course) still reports 99.99 GiB for the logical volume.
What will I have to specify for lvreduce?
Edit:
The currently accepted answer provides a nice workaround. However, in order to integrate such things into robust scripts, I would generally prefer to use precise measurements instead. Or maybe there's already a reliable (!) script or tool which performs the entire resize procedure in one step?
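For instance, I'd expect the exact file system size in bytes to be obtainable like this (a sketch for ext2/3/4: block count times block size, read out of the superblock with tune2fs):

tune2fs -l /dev/mapper/vg01-lv01 | awk '/^Block count:/ {c=$3} /^Block size:/ {s=$3} END {print c*s}'

lvreduce accepts sizes in bytes (b suffix), but LVM works in whole extents, so the value gets rounded to an extent boundary; I'd want to verify the rounding direction before trusting this in a script.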
I think this is best done using the --resizefs option to lvreduce/lvresize:
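lvresize --resizefs --size 97G /dev/mapper/vg01-lv01

With --resizefs, lvresize calls fsadm, which shrinks the file system before reducing the LV (and grows it after enlarging), so the file system and LV sizes can't get out of sync.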
Admittedly, that doesn't help you now, but it may in the future.
In my experience, LVM and resize2fs have the same idea of what "97G" means, so specifying the same size in both places should be fine. However, I'm paranoid, so wherever possible I use the strategy suggested by larsks in the question comments: shrink the file system to one GB smaller than the target, run lvresize to the size I actually want, and then re-run resize2fs (without a size argument) to let it expand back out and fill the whole LV.
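Spelled out for the sizes in the question, that sequence looks roughly like this (the final resize2fs, given no size argument, grows the file system to exactly fill the LV):

e2fsck -f /dev/mapper/vg01-lv01
resize2fs /dev/mapper/vg01-lv01 96G   # deliberately 1 GiB below the target
lvreduce --size 97G /dev/mapper/vg01-lv01
resize2fs /dev/mapper/vg01-lv01       # no size: expand to fill the LV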