I realize there have been several questions from people who've had issues booting already, but I think mine is a rather particular case, so I'm posting yet another question in hopes of addressing some new issues.
I've been repairing the boot process of a VM that had an initramfs (initrd.img and vmlinuz files in /boot) from kernels that were no longer installed, and it was still trying to boot from them. I am very close to being finished, but it keeps rebooting into systemd's emergency mode, where it says:
You are in emergency mode. After logging in, type "journalctl -xb" to view system logs, "systemctl reboot" to reboot, "systemctl default" or "exit" to boot into default mode.
Give root password for maintenance
(or press Control-D to continue):
I booted from a live CD, mounted the 3 pertinent partitions under /mnt, and chrooted into /mnt:
mount /dev/sda3 /mnt
mount /dev/sda2 /mnt/boot
mount /dev/sda1 /mnt/boot/efi
for i in proc dev dev/pts sys tmp run; do mount --bind /$i /mnt/$i; done
chroot /mnt
Did my repairs and rebooted.
Now my fstab is not mounting my partitions. I thought it was correctly configured: the UUIDs are copied directly from blkid | grep /dev/sda. I didn't think it was missing anything.
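(An aside, not from the original post: hand-copying UUIDs is error-prone, so it's worth extracting the value mechanically. A minimal sketch; the device name and UUID below are made-up stand-ins, substitute your real blkid output:)

```shell
# Sample blkid output line (device and UUID are hypothetical placeholders):
line='/dev/sda2: UUID="0a1b2c3d-4e5f-6a7b-8c9d-0e1f2a3b4c5d" TYPE="ext4"'

# Extract just the UUID value; against a real device you could instead run
#   blkid -s UUID -o value /dev/sda2
uuid=$(printf '%s\n' "$line" | sed 's/.*UUID="\([^"]*\)".*/\1/')
echo "UUID=$uuid"
```

You can then compare that value directly against the UUID= field in /etc/fstab instead of eyeballing it.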
Here are the errors I'm seeing right before getting to the emergency mode prompt:
[FAILED] Failed to mount /boot
See 'systemctl status boot.mount' for details.
[DEPEND] Dependency failed for Local File Systems
[DEPEND] Dependency failed for Unattended Upgrades Shutdown
[DEPEND] Dependency failed for /boot/efi
So, of course, I looked at systemctl status boot.mount, but the unit is active (green) and says it's loaded, even though my /boot folder is empty unless I manually mount /dev/sda2.
Seems very strange. Why would boot.mount say it's loading the /boot partition when it clearly isn't?
So I actually figured out the issue while I was writing the question. As you can see from what I wrote in the beginning, it was a very long process (I had been working on it for about 2 days before I got to the point of wanting to ask for help).
If you look at the very end of the Q, I had received this message from dmesg during the boot process:
So, of course, I tried systemctl status boot.mount to see what it said, but it said boot.mount is active (green), loaded, and functioning properly, even though /boot was empty unless I manually mounted /dev/sda2 (which is exactly the opposite of what I would expect).
So I started thinking something might be wrong with the service. I disabled boot.mount even though it said it was working properly:
I tried to re-enable it, but got an error:
OK, that makes sense: it's triggered through the boot process and cannot be invoked through a user command. So I tried to re-mount all devices with:
mount -a
And saw that there was an error in the /etc/fstab file (or something to that effect).
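(An aside on why the enable step errored, my understanding rather than something from the original post: boot.mount is not a hand-written unit. systemd's fstab generator creates it at boot from the /etc/fstab entry, naming the unit after the mount path, and such generated units cannot be enabled or disabled like regular services. A rough sketch of the naming rule; the real escaping is done by systemd-escape -p --suffix=mount:)

```shell
# systemd derives a mount unit's name from its mount point: strip the
# leading '/', turn any remaining '/' into '-', and append '.mount'.
# (Simplified; systemd-escape also encodes special characters.)
path="/boot"
unit="$(printf '%s' "${path#/}" | tr '/' '-').mount"
echo "$unit"    # -> boot.mount
```

On a systemd machine, systemctl cat boot.mount shows the generated file under /run/systemd/generator/, which is the tell-tale that it comes from fstab.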
The key here is: if I hadn't tried mounting the filesystems manually, I would never have received that feedback. The error message you get from mount -a when fstab contains improper syntax is incredibly helpful. A lot more helpful than:
... and then seeing a "working" systemd unit for boot.mount when /boot is not mounting (even though it did get me to the right place eventually).
So I edited the fstab and entered the filesystem info for the /boot partition that failed to mount, then re-ran mount -a (which essentially does the same thing as boot.mount) and got a positive response. Now the two partitions are mounting properly after a reboot, and all is good in the land of horseradish and marmalade.
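(For anyone comparing against their own file, here is a sketch of what sane /boot and /boot/efi entries look like; the UUIDs are made-up placeholders, take the real ones from blkid:)

```shell
# Write an example fstab fragment to a scratch file (placeholders only):
cat > /tmp/fstab.example <<'EOF'
# <file system>                           <mount point> <type> <options>  <dump> <pass>
UUID=0a1b2c3d-4e5f-6a7b-8c9d-0e1f2a3b4c5d /boot         ext4   defaults   0      2
UUID=1234-ABCD                            /boot/efi     vfat   umask=0077 0      1
EOF

# mount -a (or `findmnt --verify`) will complain about malformed lines:
grep -c '^UUID=' /tmp/fstab.example    # -> 2
```

findmnt --verify is a handy way to lint /etc/fstab before rebooting, for the same reason mount -a gave useful feedback here.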
If this does not address any of your issues, here are some additional notes on the process I went through before getting to the point above where I started looking for help (feel free to stop reading once you get to your problem):
The original issue I was having two days ago was the system trying to boot from kernels that were no longer on the system. So, after booting with the live CD, I deleted the /boot folder's contents (where all the initrd files are located). I figured I would just re-create the initramfs using update-initramfs -c -k all from the current kernels I had installed, but then I learned that I could not re-create the config or System.map files with depmod alone. This turned out to be a little more troublesome than I had bargained for.
I found the easiest way to re-generate or acquire all these files is to:
- empty /boot,
- purge the linux-image, linux-header and linux-modules packages I had no intention of using,
- clear out /usr/lib/modules, and then
- re-install the linux-image, linux-modules and linux-headers packages I intended on using (the two most current generic versions).
Note: Re-installing these 3 types of packages all at the same time was how I managed to get the
/boot/System.map and /boot/config files back; before that, re-installing only the linux-image files did not do it. It's possible they're included with the modules packages (modules would make sense), or the headers packages, but this is what worked for me.
I then ran update-grub after re-installing those files and confirming /boot was populated correctly.
I also ran bootctl install and /etc/kernel/postinst.d/zz-update-systemd-boot, so I would have systemd-boot installed as a fallback.
At one point after a reboot, I had to re-configure the default systemd target to multi-user.target instead of graphical.target, probably due to having chrooted with all those mounts from a graphical live CD to run the boot-repair program a couple of days ago, which requires graphics (and I believe /dev/pts, /tmp and /run were required to get display :0.0 to work):
systemctl set-default multi-user.target
Ok, that's about it. Hope this helps someone.
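One last footnote on the package re-install step above, not from the original post: on Debian/Ubuntu the kernel package names follow the release string, so that step can be sketched roughly like this (the version string is a made-up stand-in for the real output of uname -r):

```shell
# Stand-in for the release you actually want, e.g. "$(uname -r)":
release="5.15.0-91-generic"

# The three package families mentioned above follow this naming scheme:
for pkg in linux-image linux-modules linux-headers; do
    echo "$pkg-$release"
done

# On the real system (as root) the re-install would then be something like:
#   apt install --reinstall linux-image-$release linux-modules-$release linux-headers-$release
#   update-initramfs -c -k all
#   update-grub
```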