My instance had been running for years when it suddenly stopped responding on June 1st. I tried to reboot it, but it would not boot. It gave errors in the system log: https://pastebin.com/rSxr1kLs
Linux version 2.6.32-642.11.1.el6.x86_64 ([email protected]) (gcc version 4.4.7 20120313 (Red Hat 4.4.7-17) (GCC) ) #1 SMP Fri Nov 18 19:25:05 UTC 2016
Kernel command line: root=/dev/xvde ro LANG=en_US.UTF-8 KEYTABLE=us
VFS: Cannot open root device "xvde" or unknown-block(0,0)
Please append a correct "root=" boot option; here are the available partitions:
Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
I tried to detach the EBS volume and re-attach it as /dev/sda1, following the documentation: https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/TroubleshootingInstances.html#FilesystemKernel

However, it gave the error Error attaching volume: Invalid value '/dev/sda1' for unixDevice. Attachment point /dev/sda1 is already in use, and I was unable to attach it. I re-attached it as /dev/sda instead, but the instance still won't boot and still gives the same error in the system log.
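For reference, the same detach/re-attach can also be done with the AWS CLI. This is only a rough sketch; the volume and instance IDs are placeholders:

# the root volume can only be detached from a stopped instance
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 detach-volume --volume-id vol-0123456789abcdef0
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 --instance-id i-0123456789abcdef0 --device /dev/sda1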
I was able to launch a new instance in the exact same availability zone and attach my EBS volume to it as /dev/sdf. It shows up inside the instance as /dev/xvdj, and I mounted it with mount /dev/xvdj /xvdj (rough commands below).
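On RHEL/CentOS 6 Xen kernels the attachment point letters can be shifted inside the guest (here /dev/sdf surfaced as /dev/xvdj), so it is safer to confirm the device name with lsblk first. A minimal sketch of the steps on the new instance:

lsblk                       # find which xvd* device the attached volume got
mkdir /xvdj
mount /dev/xvdj /xvdj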
I can see the grub.conf file:
[root@ip-172-31-4-249 grub]# cat /xvdj/boot/grub/grub.conf
default=0
timeout=1
title CentOS (2.6.32-642.11.1.el6.x86_64)
root (hd0)
kernel /boot/vmlinuz-2.6.32-642.11.1.el6.x86_64 root=/dev/xvde ro crashkernel=auto LANG=en_US.UTF-8 KEYTABLE=us
title CentOS (2.6.32-504.30.3.el6.x86_64)
root (hd0)
kernel /boot/vmlinuz-2.6.32-504.30.3.el6.x86_64 root=/dev/xvde ro crashkernel=auto LANG=en_US.UTF-8 KEYTABLE=us
initrd /boot/initramfs-2.6.32-504.30.3.el6.x86_64.img
title CentOS (2.6.32-504.3.3.el6.x86_64)
root (hd0)
kernel /boot/vmlinuz-2.6.32-504.3.3.el6.x86_64 root=/dev/xvde ro crashkernel=auto LANG=en_US.UTF-8 KEYTABLE=us
initrd /boot/initramfs-2.6.32-504.3.3.el6.x86_64.img
title CentOS (2.6.32-504.el6.x86_64)
root (hd0)
kernel /boot/vmlinuz-2.6.32-504.el6.x86_64 root=/dev/xvde ro crashkernel=auto LANG=en_US.UTF-8 KEYTABLE=us
initrd /boot/initramfs-2.6.32-504.el6.x86_64.img
title CentOS (2.6.32-431.29.2.el6.x86_64)
root (hd0)
kernel /boot/vmlinuz-2.6.32-431.29.2.el6.x86_64 root=/dev/xvde ro crashkernel=auto LANG=en_US.UTF-8 KEYTABLE=us
initrd /boot/initramfs-2.6.32-431.29.2.el6.x86_64.img
title CentOS (2.6.32-431.23.3.el6.x86_64)
root (hd0)
kernel /boot/vmlinuz-2.6.32-431.23.3.el6.x86_64 root=/dev/xvde ro crashkernel=auto LANG=en_US.UTF-8 KEYTABLE=us
initrd /boot/initramfs-2.6.32-431.23.3.el6.x86_64.img
This compares to the grub.conf of the running instance:
[root@ip-172-31-4-249 grub]# cat /boot/grub/grub.conf
default=0
timeout=1
title CentOS-6-x86_64-20130527-03 2.6.32-358.6.2.el6.x86_64
root (hd0)
kernel /boot/vmlinuz-2.6.32-358.6.2.el6.x86_64 root=/dev/xvde ro
initrd /boot/initramfs-2.6.32-358.6.2.el6.x86_64.img
Does it matter that it doesn't have an initrd line in the first option?
I tried attaching the EBS volume to the new instance as /dev/sda, but it still wouldn't boot, giving the same error: Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0).
CentOS 6
I created a new instance by going to Images > AMIs > Private Images, selecting the image the instance was started from, and clicking Launch. I launched it in exactly the same availability zone: not just the same region, but the same 2a, 2b, or 2c zone as well. I stopped the new instance. I detached the EBS volume from the old instance and re-attached it to the new instance at /dev/sdf. I started the new instance. The EBS volume shows up inside the instance as /dev/xvdj, so I mounted it with mkdir /xvdj; mount /dev/xvdj /xvdj. I edited /xvdj/boot/grub/grub.conf and changed default=0 to default=1 (a sketch of this edit follows below). I saved the file, stopped the new instance, re-attached the EBS volume to the old instance, and it started. I ran yum update in the old instance, double-checked /boot/grub/grub.conf, and double-checked that it would reboot.
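A sketch of that edit in non-interactive form, plus a quick check for entries that are missing their initrd line (paths assume the broken volume is mounted at /xvdj as above):

# boot the second menu entry instead of the first (entries count from 0)
sed -i 's/^default=0/default=1/' /xvdj/boot/grub/grub.conf
# list each kernel entry together with its initrd line
grep -E '^title|initrd' /xvdj/boot/grub/grub.conf

In the grep output, the title with no initrd line after it is the entry that panics with unknown-block(0,0), which is why default=1 boots while default=0 does not.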
I also found this regarding updates to the CentOS kernel: grub.conf missing initrd path after kernel update. I noticed that after I ran yum update I now had 2 entries in grub.conf without initrd. Running # yum reinstall kernel.x86_64 works to fix that.

I've had this same issue on several occasions and had to solve it by restoring the instance from EBS snapshot backups. Today I had the same issue and was determined to resolve it without having to restore from backups. I did the following:
- Attached the broken volume to a running instance and mounted it (mount /dev/xvdh /xvdhmount)
- Moved the old boot directory aside: mv /xvdhmount/boot /xvdhmount/boot-backup
I hope this helps!
I had a similar problem with a CentOS instance. This AWS support article gives quite a good overview. Here's how I managed to solve my problem:

- Detached the /dev/sda1 disk from the broken instance
- Attached the disk as /dev/sdp to the new EC2 instance
- Mounted /dev/sdp to /data
Then I wanted to go back to a previous kernel. The instructions on the CentOS wiki were helpful:
grep "^menuentry" /data/boot/grub2/grub.cfg | cut -d "'" -f2
CentOS Linux (3.10.0-957.12.1.el7.x86_64) 7 (Core)
grub2-set-default --boot-directory /data/boot/ 'CentOS Linux (3.10.0-957.12.1.el7.x86_64) 7 (Core)'
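The --boot-directory flag is what points grub2-set-default at the grubenv under the mounted volume (/data/boot/grub2/grubenv) rather than the recovery instance's own /boot, so the default entry is changed on the broken disk and not on the helper machine.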
Then shut down the new EC2 instance, detach the volume, attach it back to the original instance (to /dev/sda1), and boot the initial instance.

Me Too!
The underlying cause was an interrupted yum upgrade; a junior staffer doing the work reconnected and ran yum-complete-transaction to finish everything.

However, something didn't write a file into /boot/initrd....newver...., which was probably related to the latest kernel entry in grub2.cfg missing its initrd=/.... line completely.
line completely.The quick fix was to reattach the boot disk volume to a different instance, mount it, and edit
/mountpoint/etc/grub2.cfg
so that the instance starts up the older version of the kernel. Then re-disconnect and reattach to/dev/sda1
of the original instance.Once you're in again, run
yum reinstall kernel*
to repeat the missing steps, and on completion reboot again to be sure it restarts properly this time and onto the newest kernel.It looks to me like your kernel got upgraded in such a way that it doesn't understand your root filesystem anymore. Your best bet is to create a new node and mount the EBS volume from the old one as a non-root / non boot device, and transfer the critical data over.
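A minimal sketch of that transfer, assuming the old volume is attached to the new node and appears as /dev/xvdf (the device name and the source paths are placeholders):

mkdir /mnt/oldroot
mount /dev/xvdf /mnt/oldroot
# copy data off the old root filesystem; adjust paths to whatever matters
rsync -a /mnt/oldroot/home/ /home/
rsync -a /mnt/oldroot/var/www/ /var/www/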
I came across a similar problem, and it turns out AWS EC2 defaults differ for launching an instance vs. creating an AMI: hardware virtualization (HVM) is the default in the first case, but paravirtual (PV) is the default for image creation.

I stumbled upon this when I tried to move an EC2 instance between availability zones by snapshotting its EBS volume and creating a new AMI, and this discrepancy in settings (which I did not pay attention to) wasted an hour for me.

tl;dr: just choose HVM when launching from a customized AMI and your instance should boot fine.
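A quick way to confirm which virtualization type an AMI was registered with before launching (the image ID is a placeholder):

aws ec2 describe-images --image-ids ami-0123456789abcdef0 --query 'Images[0].VirtualizationType' --output text

This prints hvm or paravirtual, so you can catch the PV default before wasting time on a boot loop.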