I increased the size of the partition which I'm using as an LVM PV, but running pvresize doesn't seem to see the extra space:
cuttle:~# fdisk -l /dev/vda
Disk /dev/vda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00027dbb
   Device Boot      Start         End      Blocks   Id  System
/dev/vda1   *            1          31      248976   83  Linux
/dev/vda2               32        2610    20715817+  8e  Linux LVM
Which says that vda2 is about 20 GB.
cuttle:~# pvdisplay
--- Physical volume ---
PV Name /dev/vda2
VG Name debian
PV Size 4.76 GiB / not usable 3.08 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 1217
Free PE 0
Allocated PE 1217
PV UUID tehZic-5vfN-rsrm-B8lN-lpgc-yQT1-ioH1V0
So currently the PV is only about 4-5 GB.
cuttle:~# pvresize -v /dev/vda2
Using physical volume(s) on command line
Archiving volume group "debian" metadata (seqno 12).
No change to size of physical volume /dev/vda2.
Resizing volume "/dev/vda2" to 9975981 sectors.
Updating physical volume "/dev/vda2"
Creating volume group backup "/etc/lvm/backup/debian" (seqno 13).
Physical volume "/dev/vda2" changed
1 physical volume(s) resized / 0 physical volume(s) not resized
pvresize says "No change to size of physical volume /dev/vda2"
cuttle:~# pvdisplay
--- Physical volume ---
PV Name /dev/vda2
VG Name debian
PV Size 4.76 GiB / not usable 3.08 MiB
Allocatable yes (but full)
PE Size 4.00 MiB
Total PE 1217
Free PE 0
Allocated PE 1217
PV UUID tehZic-5vfN-rsrm-B8lN-lpgc-yQT1-ioH1V0
and the size of the pv hasn't changed.
Not sure what else I might do to use the extra space. I suppose I could resize the partition to the size of the pv, then add a second partition, but it really seems to me that what I'm trying to do here should work.
Run partprobe so the kernel re-reads the changed partition table:
partprobe /dev/vda
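After the kernel has re-read the table, rerun pvresize against the PV from the question:

pvresize /dev/vda2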
In my case, I had a volume of 60 GB and extended it to 110 GB.
After resizing the disk from the AWS console, running df -kh showed the new size of the disk, as expected. In a normal case, the next step would be expanding the physical volume /dev/nvme3n1p1, but pvresize did not reflect the new extra space as expected. After investigating the issue, it looks like this is a kernel issue; newer kernels would normally detect the extra space automatically. I saw that some advise rebooting the instance, but this also did not work.
To solve this issue, we need to run growpart on the target disk partition. Then we can see the changes reflected.
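A minimal sketch of that step, assuming the disk is /dev/nvme3n1 and the PV sits on its first partition as in this answer (growpart ships in the cloud-utils/cloud-guest-utils packages on most distributions):

growpart /dev/nvme3n1 1     # grow partition 1 to fill the enlarged disk
lsblk /dev/nvme3n1          # the partition should now show the new size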
Then we need to check the LVM path:
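For example, under the same assumptions (the VG/LV names are whatever lvdisplay reports on your system):

pvresize /dev/nvme3n1p1     # the PV now picks up the grown partition
lvdisplay                   # note the "LV Path" of the volume to grow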
Tell LVM to extend the logical volume to use all of the new partition size:
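A sketch, assuming the LV Path reported above is /dev/mapper/vg-root (substitute your own):

lvextend -l +100%FREE /dev/mapper/vg-root   # use all free extents in the VG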
Finally, we extend the filesystem:
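The command depends on the filesystem; assuming the same hypothetical LV path and a root mount point:

resize2fs /dev/mapper/vg-root    # ext2/3/4
xfs_growfs /                     # XFS (takes the mount point, not the device)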
You first need to add space at the storage level, and then increase the space on the iSCSI device by executing:
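As an example, assuming an open-iscsi initiator (the exact command depends on your setup), rescanning the sessions makes the initiator notice the new LUN size:

iscsiadm -m session --rescan    # rescan all iSCSI sessions, then run pvresize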
I was running into this issue on a CentOS 7 guest system. In my case I had increased the ZFS ZVOL size, didn't see any change in the guest, and pvresize would not change it. I ended up booting into SystemRescueCD 4.4.0 and used "parted" with the resizepart command. In CentOS I had parted 3.1, which did not have that command; it looks like parted 3.2 is in SysRescCD now, which worked.
After booting into the SysRescCD ISO, run parted /dev/ and use the following as an example:
resizepart 2 37.6G
where 2 is the partition number and 37.6G is the desired new, larger size.
After that, while still in the boot ISO, I ran pvresize and it worked correctly. Reboot into the VM (or your system) and everything looks good from there. :) Hope that helps!
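A compact sketch of that sequence, using the question's /dev/vda layout and the example size above (your device, partition number and size will differ):

parted /dev/vda
(parted) resizepart 2 37.6G
(parted) quit
pvresize /dev/vda2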
You have to first extend the partition's size using fdisk or cfdisk. Only after that does the space become available to pvresize.
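A rough sketch of that with fdisk, assuming the question's layout where /dev/vda2 is the LVM partition; the interactive keystrokes are shown as comments, and the recreated partition must start at exactly the same sector as before:

fdisk /dev/vda
# d, 2      -> delete partition 2 (the data on disk is untouched)
# n, 2      -> recreate it with the SAME start sector and a larger end
# t, 2, 8e  -> set the type back to Linux LVM
# (if fdisk offers to remove an existing LVM signature, answer No)
# w         -> write the new partition table
partprobe /dev/vda
pvresize /dev/vda2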