Native ZFS Encryption Speed (Ubuntu 22.10)

I guess it's that time of year when I do ZFS encryption testing on the latest Ubuntu. Is ZFS speed better, worse, or the same?

Like the last time, I did my testing on a Framework laptop with an i5-1135G7 processor and 64 GB of RAM. The only change is that I am writing a bit more data during the test this time. Regardless, this is still mostly an exercise in relative numbers, and the procedure is otherwise basically the same as before.
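
In case you are curious, the gist of it is creating a pool on a ramdisk, writing a pile of data, and reading it back, once per encryption setting. This is a simplified sketch rather than the actual script; the ramdisk size, names, and data amounts below are made up:

    modprobe brd rd_nr=1 rd_size=33554432                                          # assumption: ~32 GB ramdisk (rd_size is in KiB)
    zpool create -O compression=off -O primarycache=metadata testpool /dev/ram0    # no compression; keep file data out of the ARC so reads are not just cache hits
    zfs create -o encryption=aes-256-gcm -o keyformat=passphrase testpool/crypt    # one dataset per cipher under test; prompts for a passphrase
    dd if=/dev/zero of=/testpool/crypt/blob bs=1M count=8192 conv=fdatasync        # sequential write test
    dd if=/testpool/crypt/blob of=/dev/null bs=1M                                  # sequential read test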

With all that out of the way, you could probably just look at the 22.04 figures and be done. While there are some minor differences between 22.10 and 22.04, nothing is big enough to change any recommendation here. Even the ZFS version reflects this, with only a tiny bump from zfs-2.1.2-1ubuntu2 to zfs-2.1.5-1ubuntu2.

ZFS GCM is still the fastest when it comes to writing, and it beats LUKS by a wide margin. It was a surprise when I saw it with 22.04 and it's still a surprise now. The surprise is not that ZFS is fast but why the heck LUKS is so slow. When it comes to reading speed, LUKS is still slightly faster, but not by a wide margin.

With everything else being equal, I would say ZFS GCM is the clear winner here, with LUKS coming in a close second if you don't mind the slower write speed. In most real-world scenarios I would expect the two to be indistinguishable.

Using CCM encryption doesn't make much sense at all. While its speed does seem to benefit when the AES instruction set is disabled (part of which I suspect is an artefact of my test environment), even then it's just a smidgen faster than GCM. Considering that any future hardware upgrade is going to bring the AES instruction set with it, I would say GCM is the way forward.
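
If you do go with GCM, the cipher is just a dataset property set at creation time. For example (pool and dataset names below are placeholders):

    zfs create -o encryption=aes-256-gcm -o keyformat=passphrase -o keylocation=prompt tank/secure
    zfs get encryption tank/secure    # verify which cipher the dataset ended up with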

Of course, the elephant in the room is the fact that native ZFS encryption doesn't actually cover all metadata. The only alternative that helps with that is running ZFS on top of LUKS. I honestly go back and forth between the two, with my current preference being LUKS on laptops and native ZFS encryption on servers, where encrypted send/receive is a killer feature.
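
For illustration, the two setups differ only in where the encryption layer sits; device paths, pool, and host names below are placeholders:

    # laptop: ZFS on top of LUKS, so metadata is covered as well
    cryptsetup luksFormat /dev/nvme0n1p3
    cryptsetup open /dev/nvme0n1p3 crypttank
    zpool create tank /dev/mapper/crypttank

    # server: native ZFS encryption, replicated raw so the key never leaves the source
    zfs send --raw pool/data@snap | ssh backuphost zfs receive backuppool/data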

You can take a peek at the raw data and draw your own conclusions. As always, keep in mind that these are limited synthetic tests intended only to give you a ballpark figure. Your mileage may vary.

5 thoughts to “Native ZFS Encryption Speed (Ubuntu 22.10)”

  1. Hi Josip.

    There are two flags that allow you to bypass dm-crypt workqueues: no-write-workqueue, no-read-workqueue. They make a big difference for fast storage devices (NVMe, ramdisk).

    They can be specified as parameters when formatting the device
    cryptsetup luksFormat … --perf-no_write_workqueue --perf-no_read_workqueue /dev/…

    or enabled later
    cryptsetup --perf-no_read_workqueue --perf-no_write_workqueue --persistent refresh

    Of course you can also add options to /etc/crypttab if you don’t want them to be persistent:
    UUID= none no-read-workqueue,no-write-workqueue

    Both flags can be verified with
    cryptsetup luksDump /dev/… | grep Flags

    Longer story in “Speeding up Linux disk encryption” by Ignat Korchagin (2020-03-25)
    https://blog.cloudflare.com/speeding-up-linux-disk-encryption/

    Josip, could you test it on the Framework laptop? TIA :-)

    1. Interestingly, the numbers are just slightly better than without those parameters. My best guess is that, due to ZFS already queueing the data, LUKS queueing doesn’t really come into the picture.

      I will include those arguments in future tests, but they’re not a silver bullet. :(

  2. Hey what do you think of this testing method that shows LUKS outperforms ZFS?
    https://www.reddit.com/r/zfs/comments/wdrfxp/testing_and_comparing_io_performance_with_and/
    https://github.com/jkool702/zfsEncryption_SpeedTest

    I did some testing with your script (extracted from the google sheet) and put the results in comments here:
    https://gist.github.com/digitalsignalperson/0da0cd70ab8c64f32583976cd4bd180b

    Running your script as-is I get average r/w speeds of:
    – raw: read: 7.28 GB/s; write: 3.22 GB/s
    – none: read: 2.90 GB/s; write: 1.88 GB/s
    – aes-128-gcm: read: 2.30 GB/s; write: 1.64 GB/s
    – aes-192-gcm: read: 2.34 GB/s; write: 1.66 GB/s
    – aes-256-gcm: read: 2.30 GB/s; write: 1.66 GB/s
    – aes-128-ccm: read: 342.00 MB/s; write: 263.20 MB/s
    – aes-192-ccm: read: 334.60 MB/s; write: 259.60 MB/s
    – aes-256-ccm: read: 335.20 MB/s; write: 255.00 MB/s
    – luks: read: 2.50 GB/s; write: 1.34 GB/s

    But if I reduce the number of tests from {1..5} to just 1, change the sleep time from 13sec to 2sec, and only test [raw, none, aes-256-gcm, luks], I get drastically higher read performance with none and LUKS:

    – raw: read: 7.40 GB/s; write: 2.90 GB/s
    – none: read: 6.80 GB/s; write: 1.90 GB/s
    – aes-256-gcm: read: 1.80 GB/s; write: 1.60 GB/s
    – luks: read: 6.70 GB/s; write: 1.30 GB/s

    This seems consistent with the test method in that reddit link above. Any thoughts on what’s happening here? The other OP suggests there might be an ARC related bug: https://www.reddit.com/r/zfs/comments/wdrfxp/comment/ij1sw26/?utm_source=reddit&utm_medium=web2x&context=3

    Is the former or the latter test going to be more indicative of real world performance?

    1. > Hey what do you think of this testing method that shows LUKS outperforms ZFS?
      I think those actually match what you can read from my tests: ZFS is faster on writes, LUKS is faster on reads. That said, those tests use a slightly different methodology (e.g. mixed block sizes), so the percentages are not directly comparable, but they agree in broad strokes.

      > But if I reduce the number of tests from {1..5} to just 1, change the sleep time from 13sec to 2sec, and only test [raw, none, aes-256-gcm, luks], I get drastically higher read performance with none and LUKS:

      Not sure why you see that. My guess would be something to do with caching but it’s hard to say.

      > Is the former or the latter test going to be more indicative of real world performance?
      Neither test is appropriate for estimating real-world performance, to be honest. They are just good for illustrating relative performance under limited circumstances, and that’s that. If you want a proper test, you need to look at running these tests directly on disk and using something with mixed loads (e.g. fio).
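
      For example, something along these lines would be a reasonable starting point (the dataset path, sizes, and queue depth here are just placeholders, not a recommendation):

      fio --name=mixed --directory=/tank/test --size=8G --rw=randrw --rwmixread=70 \
          --bsrange=4k-128k --ioengine=libaio --iodepth=16 --runtime=60 --time_based \
          --group_reporting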
