To increase the performance of a ZFS pool, I decided to add a read cache (L2ARC) in the form of an SSD partition. As always with ZFS, a certain amount of micromanagement is needed for optimal benefits.
The usual recommendation is to have at most 10 GB of cache for each 1 GB of available RAM, since ZFS always keeps the headers for cached data in RAM. As my machine had a total of 8 GB, this pretty much restricted me to a cache size somewhere in the 60 GB range.
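To put a rough number on that header overhead, here is a back-of-the-envelope calculation. The average record size and per-record header cost below are assumptions for illustration only; real values depend on your workload and ZFS version.

```shell
# Back-of-the-envelope check of the RAM cost of L2ARC headers.
# ASSUMPTIONS (not from the post): average cached record of 8 KiB,
# roughly 180 bytes of ARC header per L2ARC record.
l2arc_kib=$((48 * 1024 * 1024))        # 48 GiB of cache, in KiB
records=$((l2arc_kib / 8))             # number of 8 KiB records
echo "$((records * 180 / 1024 / 1024)) MiB of RAM for headers"
# prints: 1080 MiB of RAM for headers
```

So a 48 GB cache should cost on the order of a gigabyte of RAM just for headers, which is why the cache cannot be sized independently of memory.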
To keep things sane, I decided to use 48 GB. As sizes go, this is quite an unusual one and I doubt you can even get such an SSD. Not that it mattered, as I already had a leftover 120 GB SSD lying around.
Since I already had NAS4Free installed on it, I checked the partition status
# gpart status
   Name  Status  Components
  da1s1      OK  da1
 ada1s1      OK  ada1
 ada1s2      OK  ada1
 ada1s3      OK  ada1
ada1s1a      OK  ada1s1
ada1s2b      OK  ada1s2
ada1s3a      OK  ada1s3
and deleted the last partition:
# gpart delete -i 3 ada1
ada1s3 deleted
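If you want to double-check the result before creating the new partition, gpart can print the disk layout. This is just a sanity-check sketch; ada1 is the same disk as above.

```shell
# Print partition offsets, sizes, and types on the SSD. After the delete
# above, the space previously used by partition 3 should show as free.
gpart show ada1
```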
Then we have to create the partition and, optionally, label it:
# gpart add -t freebsd -s 48G ada1
ada1s3 added

# glabel label -v cache ada1s3
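To confirm the label was created, glabel can list it. The expected output is from my recollection of FreeBSD's glabel, so verify it on your own system.

```shell
# The label should appear as label/cache, backed by ada1s3, and a
# matching device node should exist under /dev/label/.
glabel status
ls /dev/label/
```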
As I had an encrypted data pool, it only made sense to encrypt the cache too. For this, it is very important to check the physical sector size:
# camcontrol identify ada1 | grep "sector size"
sector size           logical 512, physical 512, offset 0
Whichever physical sector size you see there is the one you should give to geli; otherwise you will get a permanent ZFS error status when you add the cache device. It won't hurt the pool, but it will hide any real errors going on, so it is better to avoid. In my case, the physical sector size was 512 bytes:
# geli init -e AES-XTS -l 128 -s 512 /dev/label/cache
# geli attach /dev/label/cache
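If you want to verify that geli really was initialized with the intended sector size, the on-disk metadata can be dumped back. The exact field name is from my reading of geli(8), so double-check it on your system.

```shell
# Dump geli metadata for the cache provider and show the sector size field.
geli dump /dev/label/cache | grep -i sectorsize
```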
The last step is adding the encrypted cache to our pool:
# zpool add Data cache label/cache.eli
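To confirm the pool accepted the device and to watch the cache warm up, the standard zpool tools are enough (Data is the pool name from above):

```shell
# The encrypted cache device should be listed in its own "cache"
# section of the pool status output.
zpool status Data

# Per-device bandwidth and operations, including the cache device,
# refreshed every 5 seconds.
zpool iostat -v Data 5
```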
All that's left is to enjoy the speed. :)
Other ZFS posts in this series:
[2018-07-22: NAS4Free has been renamed to XigmaNAS as of July 2018]