While trying to recover my EC2 instance, I noticed that I could not mount its root volume on another machine without first generating a new UUID:
xfs_admin -U generate /dev/xdfg
(This was because the system refused to mount the drive, complaining about a duplicate UUID; I still don't know why it said that.)
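(For anyone hitting the same thing, the lines below are a rough sketch of how I would expect the duplicate UUID to show up; the device names are illustrative, and XFS's nouuid mount option is, as far as I know, an alternative that skips the UUID check entirely instead of rewriting it.)
# compare the filesystem UUIDs of the instance's own root disk and the attached volume
sudo blkid /dev/xvda1 /dev/xdfg
# alternative: mount with nouuid so XFS ignores the duplicate UUID
sudo mkdir -p /mnt/recovery
sudo mount -o nouuid /dev/xdfg /mnt/recovery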
This allowed me to access the volume. However, after reattaching it to the original EC2 instance, the boot failed with an
unknown filesystem
error and dropped me into grub rescue.
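(From what I gather, GRUB locates the root filesystem by its UUID. The line below is an illustrative example of the kind of entry a grub2 grub.cfg typically contains; the flags and the UUID are placeholders, not taken from my actual config.)
search --no-floppy --fs-uuid --set=root xxxxxxxx-xxxx-Mxxx-Nxxx-xxxxxxxxxxxx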
To resolve this, I mounted the drive back on the secondary machine and changed its UUID back to its original value (luckily I still had it in my console history):
xfs_admin -U xxxxxxxx-xxxx-Mxxx-Nxxx-xxxxxxxxxxxx /dev/xdfg
This allowed me to boot back into the machine.
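(If I were doing this again, I would double-check that the restored UUID matches what the boot configuration expects before reattaching the volume; the paths below assume a grub2-style layout and are illustrative rather than copied from my setup.)
sudo mount -o nouuid /dev/xdfg /mnt/recovery
sudo blkid /dev/xdfg
grep -i uuid /mnt/recovery/etc/fstab /mnt/recovery/boot/grub2/grub.cfg | head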
Question
So, out of curiosity: what is it about the UUID that prevents the system from booting? When the volume was mounted on a separate machine, with either UUID, the system recognized the filesystem as xfs.