I've just installed Devuan Excalibur using mostly the OpenZFS instructions for Debian Trixie, as I did before except with Chimaera (which I then upgraded to Daedalus). I had a grand old time getting it to boot and started over a couple of times, which is not a big deal because it does not take very long to get to the point where it doesn't boot. I had been using legacy booting and chose to try UEFI, which is working okay now. I had to clean the MBRs of both of my mirror volumes in order to get it to work, though. (Overwriting the first 512 bytes will do it, or just the first 446 bytes if you don't want to trash the partition table, which starts at byte 446... but I just started over from scratch.)
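If you want to do the same, something like this should do it; /dev/sda and /dev/sdb here are stand-ins for whatever your mirror members actually are, so double-check before pointing dd at anything:

# Zero only the 446-byte boot code area, leaving the partition table intact
sudo dd if=/dev/zero of=/dev/sda bs=446 count=1
sudo dd if=/dev/zero of=/dev/sdb bs=446 count=1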
In order to get set up, I booted the Devuan Excalibur Desktop ISO in a kvm/qemu virtual machine I created using virt-manager, with a USB stick attached as the storage device, and installed Devuan with XFCE to the stick. I then booted the physical machine, which has a couple of other disks in it that I use for experiments, from that stick, and did the rest of the install from there as per the instructions above.
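For reference, the VM setup amounts to something like the following virt-install invocation. I actually clicked through virt-manager, so the VM name, ISO filename, memory size, and /dev/sdX are all illustrative placeholders:

# Boot the installer ISO in a VM with the raw USB stick as its only disk
sudo virt-install --name devuan-excalibur \
  --memory 4096 --vcpus 2 \
  --cdrom devuan_excalibur_desktop.iso \
  --disk path=/dev/sdX,format=raw \
  --os-variant generic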
Long story short, if you're here frustrated because you're not booting, this one-liner will fix what's wrong with the grub.cfg:
sudo sed -i 's#root=ZFS=/ROOT/debian ##' /boot/grub/grub.cfg

OK, so why is this additional root= info being added to my boot stanza? I tracked it down to /etc/grub.d/10_linux:
case x"$GRUB_FS" in
xbtrfs)
rootsubvol="`make_system_path_relative_to_its_root /`"
rootsubvol="${rootsubvol#/}"
if [ "x${rootsubvol}" != x ]; then
GRUB_CMDLINE_LINUX="rootflags=subvol=${rootsubvol} ${GRUB_CMDLINE_LINUX}"
fi;;
xzfs)
rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2>/dev/null || true`
bootfs="`make_system_path_relative_to_its_root / | sed -e "s,@$,,"`"
LINUX_ROOT_DEVICE="ZFS=${rpool}${bootfs%/}"
;;
esac

I went on a fun detour through /usr/share/grub/grub-mkconfig_lib, where I found out that "grub_probe" is grub-probe and "make_system_path_relative_to_its_root" is grub-mkrelpath. Thanks for that, grub folks.
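That means both halves of LINUX_ROOT_DEVICE can be reproduced by hand. grub-mkconfig sets GRUB_DEVICE using grub-probe --target=device /, so the fs_label half should come out to something like this (my untested reconstruction of the equivalent invocation):

sudo grub-probe --device "$(sudo grub-probe --target=device /)" --target=fs_label

And the bootfs half: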
$ grub-mkrelpath / | sed -e "s,@$,,"
/ROOT/debian

Whatever $GRUB_DEVICE is, grub-probe doesn't report that its fs_label is "rpool", which is the name of the ZFS pool that contains the boot volume. With a little help I figured out how I could chop this out with a script written into /etc/grub.d, which I called 90_zfs, and which contains:
sed -i 's#root=ZFS=/ROOT/debian ##' /boot/grub/grub.cfg.new

It targets grub.cfg.new because that's the file grub-mkconfig builds, and grub.cfg is only replaced with it after all of these scripts have run. This is a nice safety mechanism: if the sed command fails, the replacement won't happen and the old config file will remain in place. Since we usually keep at least one prior kernel around, this is a good strategy for ending up with a system that boots.
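For completeness, here's a sketch of the whole file as I'd write it; the shebang and set -e are my additions, on the assumption that you want a failing sed to abort the run so the old grub.cfg survives. Like everything in /etc/grub.d, it has to be executable to be run:

#!/bin/sh
# /etc/grub.d/90_zfs: strip the broken root= argument out of the
# config grub-mkconfig is assembling. Runs after 10_linux.
set -e
sed -i 's#root=ZFS=/ROOT/debian ##' /boot/grub/grub.cfg.new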
OK, another hack I did, which I'm not sure was completely necessary, was to create an init script to sub in for a unit file suggested in the instructions at step 4.13. Here are the relevant parts of the .service file:
Before=zfs-import-scan.service
Before=zfs-import-cache.service
ExecStart=/sbin/zpool import -N -o cachefile=none bpool
# Work-around to preserve zpool cache:
ExecStartPre=-/bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
ExecStartPost=-/bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache

OK, so we need to run zpool import bpool with some options, first saving and afterwards restoring a cache file. Weird, but I'm taking it on faith as it's from the official source. So I went to look at the zfs init scripts in /etc/init.d, found zfs-import, and hacked and whacked it down into the following script, which I called zfs-bpool:
#!/bin/sh
# SPDX-License-Identifier: BSD-2-Clause
# shellcheck disable=SC2154
#
# zfs-bpool This script will import ZFS pools
#
# chkconfig: 2345 01 99
# description: imports bpool by force during system boot.
# probe: true
#
### BEGIN INIT INFO
# Provides: zfs-bpool
# Required-Start: mtab
# Required-Stop: $local_fs mtab
# Default-Start: S
# Default-Stop: 0 1 6
# X-Start-Before: zfs-import
# Short-Description: Import ZFS bpool
# Description: Import ZFS bpool by force before zfs-import runs
### END INIT INFO
#
# NOTE: Not having '$local_fs' on Required-Start but only on Required-Stop
# is on purpose. If we have '$local_fs' in both (and X-Start-Before=checkfs)
# we get conflicts - import needs to be started extremely early,
# but not stopped too late.
#
# Released under the 2-clause BSD license.
#
# This script is based on debian/zfsutils.zfs.init from the
# Debian GNU/kFreeBSD zfsutils 8.1-3 package, written by Aurelien Jarno.
# Source the common init script
. /etc/zfs/zfs-functions
# ----------------------------------------------------
if true
then
    case "$1" in
        start)
            # Mirror the unit file's work-around: stash the cache file,
            # import bpool without writing a cache, then restore it.
            /bin/mv /etc/zfs/zpool.cache /etc/zfs/preboot_zpool.cache
            /sbin/zpool import -N -o cachefile=none bpool
            /bin/mv /etc/zfs/preboot_zpool.cache /etc/zfs/zpool.cache
            ;;
        stop)
            # no-op
            ;;
        status)
            /sbin/zpool status bpool
            ;;
        force-reload|condrestart|reload|restart)
            # no-op
            ;;
        *)
            [ -n "$1" ] && echo "Error: Unknown command $1."
            echo "Usage: $0 {start|status}"
            exit 3
            ;;
    esac
    exit $?
else
    # Create wrapper functions, since Gentoo doesn't use the case part.
    # (This branch is unreachable here; it's a leftover from zfs-import.)
    depend() { do_depend; }
    start() { do_start; }
    status() { do_status; }
fi

Made this executable (chmod +x /etc/init.d/zfs-bpool), then added it to the boot sequence:
sudo update-rc.d zfs-bpool defaults

There were really no other surprises in getting this working for me. I think these are the best options for solving these two problems, for more or less the same reason. For the grub.cfg problem, the alternative would be to rewrite and replace a stock script, figure out how best to arrange that, and then track it whenever the original changed. The regexp I used only removes a very specific string that should never be in the file, so if that file changes later the worst case is that it does nothing and stops solving the problem. On the other hand, creating a new init script is absolutely the best way to run some commands at a specific place in the boot order relative to other init scripts. I could have edited /etc/init.d/zfs-import instead, but again I'd be in the position of potentially having problems whenever it changes.
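Back to the init script for a second: if you want to confirm where it landed in the boot sequence, the Default-Start: S header should have produced a link under /etc/rcS.d. This check is my habit rather than anything from the instructions:

# The numeric prefixes show zfs-bpool ordered before zfs-import
ls -l /etc/rcS.d/ | grep zfs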
The only other surprise I had during this process was that some problem with sddm was causing KDE sessions to fail, logging out to a black screen after a delay. I've switched to slim, and as I do not care much what the login screen looks like, I may find this satisfying.
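Swapping display managers is just a package swap plus picking the default; something along these lines, where the dpkg-reconfigure step is what re-asks which one should own the console:

sudo apt install slim
sudo dpkg-reconfigure slim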
Let's see, editing /etc/grub.d/10_linux to output GRUB_DEVICE to a text file shows

[...]

So then...

[...]
Uh-huh. So this is not a good way to figure out the name of the pool from the device the pool is on, at all. If I knew a good way, I might try to fix this, but instead I think I'll just stick with my little hack, thanks. I suppose you could do something with zpool status -L; for example, see this snippet:

[...]
As you can see, it gets complicated once you take mirror and raid configurations into account. You can combine -L with -j for JSON, like so:
[...]
And so on. It's probably more sensible to parse that. But I'm not gonna, because my hack is working :)
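If anyone does want to attempt a fix, the approach I'd try first skips the device probing entirely and asks what dataset is mounted at /, since the pool name is everything before the first slash. An untested sketch:

# Hypothetical replacement for the fs_label probe in 10_linux:
# findmnt prints the source dataset for /, e.g. 'rpool/ROOT/debian'
rpool="$(findmnt -n -o SOURCE /)"
rpool="${rpool%%/*}"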