Setting up ZFS SLOG, and further adventures with duplicate rpool names

By drink | Thu February 23, 2023

Without getting [too] technical (and probably operating "above my pay grade"): the ZFS ZIL is where synchronous writes are logged before they are committed to the main pool, and you can move it to a separate device, called a SLOG (separate log). One SLOG device can be partitioned to hold the ZIL for multiple zpools. While most writes by most applications are asynchronous, some are not, and file metadata writes are synchronous. Putting the ZIL for an HDD (or even an array) on an SSD lets your zpool deliver the synchronous-write IOPS of your SLOG, while also avoiding fragmentation and reducing seeks through ordered writes.
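Whether a given write touches the ZIL at all depends on the application (whether it asks for synchronous semantics) and on the dataset's sync property. As a quick sanity check, assuming the pool name tank used below, you can inspect that property recursively:

# sync=standard honors application fsync/O_SYNC; always/disabled override it
zfs get -r sync tank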

I should stop here and note that the right thing to do, if you care about your data, is to use a pair of devices in a mirror for your SLOG, which protects you from data loss if one of the devices fails. However, the SLOG is only read after a crash or power loss, when data in memory had not yet been written to the target zpool, so the risk is relatively small. I am also using the SLOG to improve performance when manipulating relatively low-value data stored on a scratch disk, which could be regenerated.
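For completeness, adding a mirrored SLOG is just a matter of naming two devices; the paths below are placeholders rather than my hardware:

# add a mirrored log vdev, so losing one SSD doesn't risk in-flight sync writes
zpool add tank log mirror /dev/disk/by-id/nvme-EXAMPLE-A /dev/disk/by-id/nvme-EXAMPLE-B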

Now that I've got my desktop system running Devuan, I'm relocating the 512GB M.2 SSD into my laptop, an HP Ryzen 3 system which came with a 128GB SATA SSD; it only accepts 80mm (standard 2280-length) M.2 cards. When I bought my Ryzen 5 desktop (as a MB/CPU/SSD/GPU/RAM bundle, used on eBay) it came with an SK Hynix HFM256GDHTNI-87A0B 256GB NVMe SSD, and now I'm using that as a ZFS SLOG device. It's nothing to write home about, but it's still a lot faster than my formerly-external WD10EADS; I have two of those 1TB WD drives in a mirrored zpool for bulk storage on my desktop. But I had, you guessed it, another Linux install on this SSD, with root on ZFS and a root pool named rpool, and that wound up preventing me from booting until I did some thrashing (and searching) around to find out how to recover.

I wound up looking at an emergency shell. The first step to recovery was simply to run zpool import -a, which did successfully discover my rpool. Running zpool import again (without options) then showed me the unimportable (and now broken) second rpool. Long story short, the solution was to simply create a new pool named rpool right "over the top" of the old one (i.e. without doing anything further to remove it) on the device in question, then destroy it. The system accepted this gracefully, and when I rebooted I got my system instead of the dracut shell.
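Reconstructed from memory rather than copied from a terminal, the recovery went roughly like this; the device is the SK Hynix NVMe that later became my SLOG. One caveat: if a pool named rpool is already imported when you reach the create step, zpool create will refuse the duplicate name, and a throwaway name does the same job of overwriting the stale labels.

# import everything that imports cleanly -- this found the real rpool
zpool import -a
# with no arguments, list what's left over -- the broken second rpool
zpool import
# stomp new labels over the stale pool, then get rid of the result
zpool create -f rpool /dev/disk/by-id/nvme-SKHynix_HFM256GDHTNI-87A0B_CS07N822710207T32
zpool destroy rpool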

Here's how I added the device (with the temporary zpool removed from it) to my storage zpool:

zpool add -f tank log /dev/disk/by-id/nvme-SKHynix_HFM256GDHTNI-87A0B_CS07N822710207T32

This is how I verified that it'd been added:

# zpool status tank
  pool: tank
 state: ONLINE
  scan: resilvered 107M in 00:00:03 with 0 errors on Wed Feb 22 03:09:21 2023
config:

        NAME                                                 STATE     READ WRITE CKSUM
        tank                                                 ONLINE       0     0     0
          mirror-0                                           ONLINE       0     0     0
            ata-WDC_WD10EADS-00L5B1_WD-WCAU4C295216          ONLINE       0     0     0
            sdb                                              ONLINE       0     0     0
        logs
          nvme-SKHynix_HFM256GDHTNI-87A0B_CS07N822710207T32  ONLINE       0     0     0

errors: No known data errors

Here's how to remove it again:

zpool remove tank nvme-SKHynix_HFM256GDHTNI-87A0B_CS07N822710207T32

And here's how to see that it's actually doing something:

# arc_summary -s zil

------------------------------------------------------------------------
ZFS Subsystem Report                            Thu Feb 23 08:49:45 2023
Linux 5.10.0-21-amd64                                            2.0.3-9
Machine: alexander (x86_64)                                      2.0.3-9

ZIL committed transactions:                                         1.8M
        Commit requests:                                          183.3k
        Flushes to stable storage:                                182.6k
        Transactions to SLOG storage pool:            4.7 GiB     165.6k
        Transactions to non-SLOG storage pool:      135.9 MiB       8.8k

You can clearly see the volume of transactions going to the SLOG versus the main pool. I used dbench with 1-4 clients (processes simultaneously writing to disk) to determine what the performance impact looks like. Unsurprisingly, it's quite large.
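For reference, the runs looked something like the following; the mountpoint and time limit here are illustrative rather than my exact invocation:

# hammer the pool's filesystem with 4 writer processes for 60 seconds
dbench -D /tank/scratch -t 60 4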

[Graphs: dbench write throughput over time, without and with the SLOG]

As you can clearly see, even momentary write performance to the array without a SLOG never exceeds 140MB/sec, and it rapidly cools down to under 80MB/sec at best. With the SLOG, even with only one process writing, it always delivers over 200MB/sec sustained, and peaks at around 700MB/sec with 3-4 clients. When writing cools off, there is a flurry of disk activity as the outstanding writes are flushed from memory to the mirror. The more disks you have in your array, and the faster they are, the less benefit you'll derive from a SLOG.
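If you want to watch that flush happening live, per-vdev I/O statistics show the log device absorbing writes and the mirror catching up afterwards:

# print per-vdev read/write activity for tank once per second
zpool iostat -v tank 1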

Tags: ZFS, linux, howto

Comment from drink, 2 years 7 months ago:

As you can see above, I had the wrong device name set up in my zpool. Since it's only a storage volume, I was able to correct the problem on the live system. Simply export the zpool (in this case, "zpool export tank") and then re-import it with the -d flag, which specifies the directory to take device names from, for example "zpool import -d /dev/disk/by-id tank". I also had some wwn- links in there, and the first time I did that it picked up one of those, so I deleted them and went through the cycle again to get the desired result.
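Concretely, the cycle is just:

# export the pool, then re-import it using names from /dev/disk/by-id
zpool export tank
zpool import -d /dev/disk/by-id tank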

$ zpool status tank
[...]
          mirror-0                                           ONLINE       0     0     0
            ata-WDC_WD10EADS-00L5B1_WD-WCAU4C295216          ONLINE       0     0     0
            ata-WDC_WD10EAVS-00D7B0_WD-WCAU41874882          ONLINE       0     0     0
[...]
