I installed Ubuntu 20.04 sometime in June; by now I'm updated to Ubuntu 20.04.1 LTS.
There have been a few kernel updates, and every time I run updates, Zsys puts the new kernel version as the first boot option.
And I'd like nothing more than to use the new kernels, but they simply won't boot.
When I say won't boot, I mean some kind of freeze happens during the boot process: I can't drop to a TTY or boot command line, and I don't get any error messages (maybe I'm not hitting the right key).
I just have my motherboard logo and that's it.
The Ubuntu logo and spinner don't show up, and it could stay like this indefinitely if I did nothing.
Luckily I still have my old GRUB entry, and with Grub Customizer I've been keeping it at the top of the list.
But I want to solve this issue now, before Ubuntu decides I've been hoarding an old kernel for too long.
Is there something in particular I'm supposed to do to boot a new kernel with ZFS?
I'm guessing it has nothing to do with the specific kernel version and more to do with ZFS being a block that's not yet intricately tied into the rest of the OS: where Ubuntu is able to automatically update the kernel references everywhere else, maybe it doesn't yet automatically update something on the ZFS side that points at the kernel version, or something like that?
I don't know.
Anyways:
t@tsu:~$ dpkg -l | tail -n +6 | grep -E 'linux-image-[0-9]+'
ii linux-image-5.4.0-40-generic 5.4.0-40.44 amd64 Signed kernel image generic
ii linux-image-5.4.0-45-generic 5.4.0-45.49 amd64 Signed kernel image generic
ii linux-image-5.4.0-47-generic 5.4.0-47.51 amd64 Signed kernel image generic
5.4.0-40 boots and is the one my Ubuntu 20.04 installed with.
5.4.0-45 does not boot and was installed by updates, not by me.
5.4.0-47 does not boot and was installed by updates, not by me.
Ideally I'd like to boot 5.4.0-47.
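In case it's relevant: as far as I understand, each installed kernel is supposed to have a matching image and initramfs pair under /boot, which can be checked with something like this (just a diagnostic sketch):

```shell
# Each installed kernel should have a matching vmlinuz-* and initrd.img-*
# pair in /boot; a missing or empty initrd would explain a kernel that
# never gets past the firmware logo.
ls -l /boot/vmlinuz-* /boot/initrd.img-*
```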
EDIT:
On kernel installs I do get an error:
Setting up linux-modules-5.4.48-050448-generic (5.4.48-050448.202006220832) ...
Setting up linux-image-unsigned-5.4.48-050448-generic (5.4.48-050448.202006220832) ...
I: /boot/vmlinuz.old is now a symlink to vmlinuz-5.4.0-47-generic
I: /boot/initrd.img.old is now a symlink to initrd.img-5.4.0-47-generic
I: /boot/vmlinuz is now a symlink to vmlinuz-5.4.48-050448-generic
I: /boot/initrd.img is now a symlink to initrd.img-5.4.48-050448-generic
Processing triggers for linux-image-unsigned-5.4.48-050448-generic (5.4.48-050448.202006220832) ...
/etc/kernel/postinst.d/dkms:
* dkms: running auto installation service for kernel 5.4.48-050448-generic
...done.
/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-5.4.48-050448-generic
I: The initramfs will attempt to resume from /dev/sda2
I: (UUID=53c19176-f03e-4c40-a6ed-3a2627160647)
I: Set the RESUME variable to override this.
/etc/kernel/postinst.d/zz-update-grub:
Sourcing file `/etc/default/grub'
Sourcing file `/etc/default/grub.d/init-select.cfg'
Generating grub configuration file ...
and a ton of these:
Warning: Couldn't find any valid initrd for dataset rpool/ROOT/ubuntu_38tazy@autozsys_7lfyl1.
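Given that warning, I'm guessing the eventual fix involves regenerating the initramfs for the new kernel and then rebuilding GRUB, something like the following, though I haven't confirmed that it actually helps here:

```shell
# Create a fresh initramfs for the kernel that won't boot...
sudo update-initramfs -c -k 5.4.0-47-generic
# ...then regenerate the GRUB menu so its entries point at it.
sudo update-grub
```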
and here's the update-grub output (it doesn't look optimal):
Looking at your logs, I was wondering whether Ubuntu has been able to generate the boot files properly, and whether you have enough space in /boot. Is /boot a separate partition, or a directory under the root filesystem? Many default Ubuntu installations had small /boot partitions that filled up and caused problems. Are you using encryption?
Another idea: the kernels that fail were installed by the updater; can you install one of those kernels manually with apt install?
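To check those things, something along these lines should do (a sketch; swap in whichever kernel version is failing for you):

```shell
# How much space is left where the boot files live?
df -h /boot

# Is /boot its own partition, or just a directory on the root fs?
findmnt /boot || echo "/boot is not a separate mount; it lives on the root filesystem"

# Reinstall one of the failing kernels by hand and watch for errors:
sudo apt install --reinstall linux-image-5.4.0-47-generic
```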
Cheers
OK, I found out what it was. Completely unrelated to the kernel version in the end...
As per usual: Windows declaring all-out war on anything Linux.
It's so low of them.
Anyways: any Windows install, even one on a separate computer, will attempt to murder GRUB one way or another, so long as you plug its hard drive into the same motherboard as your Linux OS.
And the latest entry in this swathe of attacks leaves GRUB itself alone but targets the boot entries, rendering them unbootable.
I was associating this with kernels since GRUB entries are named after kernel versions, and only the entries created before I booted Windows remained intact.
Perhaps those are undetectable to Windows, or perhaps ZFS simply restores to that state when you choose the entry, making the attacks inconsequential.
So:
In my previous scuffle with this issue (Windows deleting the GRUB partition whenever Windows was booted and updated), my solution was to boot from an Ubuntu live USB stick and run Boot-Repair, which would re-create the boot partition with a working GRUB.
With ZFS, though, Boot-Repair is incapable of creating a working boot entry for your ZFS partition.
So the best solution is pre-emptive: use Grub Customizer before you ever have a Windows partition anywhere near your Linux, and create duplicate backups of your favorite boot entry, so that when it's corrupted one day you can boot a duplicate, delete the corrupted one, and make new duplicates.
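If you'd rather not use Grub Customizer, the same backup can be done by hand: GRUB regenerates /boot/grub/grub.cfg on every update, but leaves /etc/grub.d/40_custom alone, so a known-good menuentry copied there survives regeneration. A rough sketch:

```shell
# See which menuentry titles the generated config currently holds:
grep "^menuentry" /boot/grub/grub.cfg

# Paste the full menuentry block of your working entry (braces
# included) at the bottom of 40_custom; update-grub won't touch it:
sudo nano /etc/grub.d/40_custom

# Rebuild grub.cfg; the duplicated entry appears at the end of the menu:
sudo update-grub
```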
There's one last issue, however: Windows somehow propagates its changes (I'm guessing by leveraging the BIOS against Ubuntu) to any newly created entry.
This means you aren't able to follow kernel upgrades after the point where you've booted Windows once.
Which leaves only my oldest and most cumbersome solution:
power down the system fully and unplug any other hard drive before you plug in the Windows hard drive.
Just thoughts, and maybe worth trying out.
ZFS isn't built into the standard Linux kernel, so you have to build it as a module, which has to be rebuilt for each kernel version. I don't believe that Ubuntu comes with ZFS modules integrated (maybe it did at some point). Try reinstalling them on your working kernel; this should build modules for the new kernel as well as create a new initrd image.
List the ZFS packages you have installed:
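For example (the exact output will vary by system):

```shell
# Show every installed package with "zfs" in its name:
dpkg -l | grep -i zfs
```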
Use apt reinstall to reinstall them.
Apt packages that you might need for ZFS:
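On a stock Ubuntu 20.04 root-on-ZFS install these are the usual candidates (a sketch; package names on your system may differ, and zfs-dkms is only relevant if you're not using the module shipped with the kernel packages):

```shell
# Reinstall the ZFS userland tools, initramfs hooks, and event daemon:
sudo apt reinstall zfsutils-linux zfs-initramfs zfs-zed

# Only if you use the DKMS-built module instead of the one shipped in
# linux-modules-*: this rebuilds zfs.ko for each installed kernel.
sudo apt reinstall zfs-dkms
```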