I have been trying to reduce the size of my Amazon Linux 1 AMI root volume using the procedure in this documentation (with some modifications made after failing to do so) and continuously run into errors with the step:
$ sudo grub-install --root-directory=/mnt/new-volume/ --force /dev/xvdf
This is legacy GRUB (version 0.97-94.32.amzn1).
I was getting the following error at first:
Unrecognized option `--force'
and as a result removed the --force flag and just used:
$ sudo grub-install --root-directory=/mnt/new-volume/ /dev/xvdf
which has since resulted in:
/dev/xvdf does not have any corresponding BIOS drive
I have tried to create the BIOS boot partition using parted or fdisk, following instructions mentioned in this thread, but every method has led to the same failure. Please note that the instance type I am using (r5.large) exposes the volumes under "nvme*" names, as shown in the lsblk output (a sketch of the parted commands I tried follows the output):
nvme0n1 259:3 0 200G 0 disk
├─nvme0n1p1 259:4 0 200G 0 part /
└─nvme0n1p128 259:5 0 1M 0 part
nvme1n1 259:0 0 40G 0 disk
├─nvme1n1p2 259:2 0 40G 0 part /mnt/new-volume
└─nvme1n1p1 259:1 0 1M 0 part
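For reference, the BIOS boot partition attempts were roughly along these lines (a sketch rather than the exact commands; /dev/nvme1n1 is the new 40 GB volume from the output above, and these commands repartition it):
# Create a GPT label, a 1 MiB BIOS boot partition, and a root partition
$ sudo parted /dev/nvme1n1 --script mklabel gpt
$ sudo parted /dev/nvme1n1 --script mkpart primary 1MiB 2MiB
$ sudo parted /dev/nvme1n1 --script set 1 bios_grub on
$ sudo parted /dev/nvme1n1 --script mkpart primary ext4 2MiB 100%
# Format and mount the root partition at the path used below
$ sudo mkfs.ext4 /dev/nvme1n1p2
$ sudo mount /dev/nvme1n1p2 /mnt/new-volume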
One article relevant to the error message was found in this Linux Questions post, but it did not resolve my issue. I have also tried chroot-ing into the partition and ran into the same error, and I have tried using an intermediary Amazon Linux 1 or Amazon Linux 2 host, but I continue running into the issue.
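The chroot attempt looked roughly like this (again a sketch, assuming the new root partition is already mounted at /mnt/new-volume):
# Bind-mount the pseudo-filesystems so grub-install can see the block devices
$ sudo mount --bind /dev /mnt/new-volume/dev
$ sudo mount --bind /proc /mnt/new-volume/proc
$ sudo mount --bind /sys /mnt/new-volume/sys
# Run the legacy grub-install from inside the new root
$ sudo chroot /mnt/new-volume grub-install /dev/nvme1n1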
I do note that this same issue occurs when using the root volume alone in Amazon Linux 1:
grub-install /dev/sda OR grub-install /dev/sda1
But regardless, the new disk cannot be booted from unless it is attached as the secondary drive. Using the grub command alone, per the Legacy GRUB manual, to install has failed as well. Am I looking at the wrong procedure for creating a new, smaller root volume, or is there something I am missing from the steps above? I can provide further details as necessary.
I followed the same manual, and this is what I think made it work:
On Ubuntu 20, /boot/grub/grub.cfg had the wrong UUID, so I needed to fix it here: /etc/default/grub.d/40-force-partuuid.cfg, and then regenerate a new /boot/grub/grub.cfg with grub-mkconfig -o ...
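Roughly, the steps were (assuming the new root partition shows up as /dev/nvme1n1p2 as in the question; on Ubuntu cloud images 40-force-partuuid.cfg typically sets GRUB_FORCE_PARTUUID):
# Look up the PARTUUID of the new root partition
$ sudo blkid /dev/nvme1n1p2
# Put that value in the GRUB_FORCE_PARTUUID= line in /etc/default/grub.d/40-force-partuuid.cfg,
# then regenerate the config
$ sudo grub-mkconfig -o /boot/grub/grub.cfg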
Additionally, I partitioned the new EBS volume, which it seems you did too. Not sure if it was necessary, though.
You have to specify the correct block device, as you are using an NVMe device rather than /dev/xvdf.
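For example, with the layout shown in the question's lsblk output, that would be something like:
$ sudo grub-install --root-directory=/mnt/new-volume/ /dev/nvme1n1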
In the meantime I found a workaround for Amazon Linux 1 by doing the following, but would still be open to further investigation (a rough AWS CLI sketch of the volume swap follows the steps):
Launch a new instance using the same AMI but changing the root volume size to the desired amount.
Stop the new instance, detach the smaller EBS volume, and attach it to the current instance where the larger root volume is attached (in the stopped state).
Start the current instance (now with the smaller EBS volume attached as a secondary drive).
Use the following to copy over the contents of the root volume (assuming that it is mounted at /mnt/new-volume):
$ rsync -axv / /mnt/new-volume
Stop the current instance, detach both volumes.
Attach the smaller new root volume to the instance.
Start the instance.
It is not the most elegant workaround, but it sufficed, since it is not clear how the original root volume is created and booted from.
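For completeness, the stop/detach/attach/start steps can also be done with the AWS CLI. A rough sketch with placeholder IDs (substitute your own instance and volume IDs, and make sure the --device name matches the AMI's root device name, /dev/xvda here):
$ aws ec2 stop-instances --instance-ids i-0123456789abcdef0
$ aws ec2 detach-volume --volume-id vol-0aaaaaaaaaaaaaaaa    # old, larger root volume
$ aws ec2 detach-volume --volume-id vol-0bbbbbbbbbbbbbbbb    # new, smaller volume
$ aws ec2 attach-volume --volume-id vol-0bbbbbbbbbbbbbbbb --instance-id i-0123456789abcdef0 --device /dev/xvda
$ aws ec2 start-instances --instance-ids i-0123456789abcdef0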