A Gentle Introduction to ZFS, Part 1: Compiling ZFS on Ubuntu 18.04 (aarch64)

OpenZFS Logo

The Zettabyte File System (ZFS) is a file system and volume manager in one tool. I’ll be using ZFS to create a NAS from:

Syba PCI-e Mini SAS Expander (SI-PEX40137)

Note that I don’t actually recommend the Syba today. The mini SAS ports will work on the Syba, but try the MZHOU B082D6XSZN instead. The MZHOU contains the same Marvell 9215 ASIC as the Syba but with 8x SATA III ports directly integrated on the expansion card. The Syba has mini SAS to SATA converter cables, but the SATA III ports make this specialized cable unnecessary.

I’ll post more on the hardware architecture later, but for this post, I’ll focus on compiling ZFS from the v2.0.0-rc1 branch (the latest at the time of writing).

Why ZFS?

ZFS makes it very easy to set up a RAID pool with more flexibility beyond striping and/or mirroring. For example, ZFS allows one to also:

  • distribute parity and spares over all drives, so that the pool can tolerate one to three drive failures and completely rebuild the missing data (depending on configuration),
  • create instant snapshot backups that can be rolled back just as easily,
  • be more robust to data corruption and system crashes, and
  • store zettabytes in the file system (although there are some practical limitations to this).
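As a taste of the snapshot workflow from the list above, here is a minimal sketch using a file-backed throwaway pool (the pool and file names are made up for illustration; it requires root and a working ZFS install):

```shell
# Create a sparse file to act as a throwaway "disk" (names are illustrative).
truncate -s 256M /tmp/zfs-demo.img
sudo zpool create demopool /tmp/zfs-demo.img

# Take an instant snapshot, change some data, then roll back just as easily.
sudo zfs snapshot demopool@clean
echo "scratch data" | sudo tee /demopool/file.txt
sudo zfs rollback demopool@clean

# Clean up the throwaway pool.
sudo zpool destroy demopool
```

File-backed vdevs like this are only for experimentation; real pools should be built on whole disks.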

On the Jetson TX2 & aarch64

You may be wondering why I didn’t just convert an old x64 computer. The answer is that I happened to have a Jetson TX2 sitting around (it got replaced by an Xavier NX). Also, the SOM is lower power than ATX computers, while also being more powerful — especially in RAM and GPU power — than most other single board computers. For a NAS running 24/7, using an embedded platform like the Jetson TX2 saves energy and reduces the system TDP.

The Jetson TX2 presents some challenges with ZFS. One challenge is that the computing architecture is aarch64 and not officially supported with easy installer packages by ZFS, NVIDIA, or Ubuntu developers. ZFS on aarch64 does go through build tests, but not runtime tests. Fortunately, it’s easy enough to build from source.

Arch Linux Logo

Also, for completeness, if one has access to the Arch User Repository (AUR), there appears to be an AUR package for v2.0.0-rc1, but I have not personally tested this (let me know if you do):

$ yay -Ss zfs-utils-git 
aur/zfs-utils-git 2:2.0.0rc1.r38.gcd80273909-1 (+5 0.00)
Userspace utilities for the Zettabyte File System.

Hardware and OS Software

  • NVIDIA Jetson TX2 (8GB) developer kit. The kit basically consists of two parts: 1) the System-on-Module (SOM), and 2) the carrier board. One way to think of this pair is that the SOM is the computer, and the carrier board is the I/O and power interface.
The Jetson TX2 Development Kit

NVIDIA describes its Jetson platform the best:

NVIDIA® Jetson™ systems provide the performance and power efficiency to run autonomous machines software, faster and with less power. Each is a complete System-on-Module (SOM), with CPU, GPU, PMIC, DRAM, and flash storage—saving development time and money. Jetson is also extensible. Just select the SOM that’s right for the application, and build the custom system around it to meet its specific needs.

The Jetson TX2 series embedded module for edge AI applications now comes in three versions: Jetson TX2, Jetson TX2i, and the newly available, lower-cost Jetson TX2 4GB. Jetson TX1-based products can migrate to the more powerful Jetson TX2 4GB at the same price.

I’ll mention a possible alternative: the Jetson AGX Xavier Developer Kit (32GB). The additional RAM over the TX2 is crucial for higher-capacity ZFS arrays past approximately 10TB. ZFS has an adaptive replacement cache (ARC), which reduces expensive disk accesses by keeping frequently and recently used data in RAM, similar to an LRU cache. The ARC generally grows to take all RAM less 1GB, so more RAM means more cached data and faster disk access. The greatest disadvantage of the Xavier dev kit is its cost: $700 plus tax and shipping.
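If the ARC should leave more memory for other workloads, its ceiling can be capped with the zfs_arc_max module parameter. A minimal sketch, assuming a 4 GiB cap is desired (the value is purely illustrative, not a recommendation for the TX2):

```shell
# Cap the ARC at 4 GiB (4 * 1024^3 = 4294967296 bytes); illustrative value only.
echo "options zfs zfs_arc_max=4294967296" | sudo tee /etc/modprobe.d/zfs.conf

# The setting applies the next time the zfs module loads; the live value is at:
cat /sys/module/zfs/parameters/zfs_arc_max
```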

  • Ubuntu 18.04. Verify with the following command line:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 18.04.5 LTS
Release: 18.04
Codename: bionic
  • Linux for Tegra (L4T). To find out which version one has, cat the Tegra release file. My version is R32.4.4.
$ cat /etc/nv_tegra_release
# R32 (release), REVISION: 4.4, GCID: 23942405, BOARD: t186ref, EABI: aarch64

If the Jetson TX2 needs to be reflashed, use the SDK manager to easily perform this step. I opted for the command-line installer rather than the GUI installer.

Compile ZFS from Source

Install Package Dependencies

Install the package dependencies:

$ sudo apt install -y alien autoconf automake build-essential dkms fakeroot gawk gdebi-core libacl1-dev libaio-dev libattr1-dev libblkid-dev libdevmapper-dev libelf-dev libselinux-dev libssl-dev libtool libudev-dev nfs-kernel-server python3 python3-dev python3-cffi python3-setuptools uuid-dev zlib1g-dev

The Linux kernel headers also need to be installed, unless we’re using NVIDIA JetPack: the L4T kernel sources and headers are installed when the Jetson TX2 is flashed.

If one happens to not be compiling on L4T, install the Linux headers for the running kernel:

$ sudo apt install linux-headers-$(uname -r)

Get and Configure ZFS Sources

Download the source. At the time of writing, only the master branch v2.0.0-rc1 has dRAID, a major improvement to RAIDZ over five years in the making. Check out v2.0.1 or greater, if available, since that should be the first stable release to include dRAID.

Update (1/18/2021): I just finished compiling v2.0.1, and my dRAID storage pool was not detected. I’m not sure why it’s not available yet, but I’ll note the first release that does support it.

$ git clone https://github.com/openzfs/zfs
$ cd zfs
$ sh autogen.sh
$ ./configure

Compile ZFS

Compile the source and get some tea.

$ make -s -j$((`nproc`-1))
Take a Tea Break!
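The -j flag above runs the build with one job per CPU core, minus one so the system stays responsive. Since nproc - 1 evaluates to 0 on a single-core machine, a slightly more defensive version of the job-count calculation might look like:

```shell
# Parallel job count: all cores but one, with a floor of 1 for single-core systems.
JOBS=$(( $(nproc) - 1 ))
if [ "$JOBS" -lt 1 ]; then
  JOBS=1
fi
echo "building with $JOBS parallel jobs"
# make -s -j"$JOBS"   # run this inside the zfs source tree
```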

Install ZFS and Initialize

After ZFS is compiled, we can install it: install the compiled binaries, link the shared libraries, and regenerate the module dependency and symbol maps. Then load the ZFS kernel module into the L4T kernel.

$ sudo make install
$ sudo ldconfig
$ sudo depmod
$ sudo modprobe zfs

If one wants to uninstall ZFS, for whatever reason, run the following command lines:

$ sudo modprobe -r zfs
$ sudo make uninstall
$ sudo ldconfig
$ sudo depmod

Start ZFS services, if not started already:

$ sudo systemctl enable --now zfs-import-cache.service
$ sudo systemctl enable --now zfs-import-scan.service
$ sudo systemctl enable --now zfs-import.target
$ sudo systemctl enable --now zfs-mount.service
$ sudo systemctl enable --now zfs-share.service
$ sudo systemctl enable --now zfs-zed.service
$ sudo systemctl enable --now zfs.target

Test the Installation

Run a simple status command to verify that ZFS and its kernel module are running. With no pools set up and the ZFS kernel module properly loaded, one should see:

$ zpool status
no pools available

If instead one sees:

$ zpool status
The ZFS modules are not loaded.
Try running '/sbin/modprobe zfs' as root to load them.

…then, the kernel module is not loaded. Try to run modprobe zfs and/or check the syslog (i.e. /var/log/syslog) for clues.
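If modprobe zfs still fails, a few checks usually narrow down the cause (the paths below assume the default make install locations):

```shell
# Was the module installed for the running kernel? (zfs.ko lands here after `make install`)
ls /lib/modules/$(uname -r)/extra 2>/dev/null || echo "no out-of-tree modules found"

# What did the kernel log during the failed load attempt?
dmesg | grep -iE 'zfs|spl' | tail -n 20

# Recent syslog entries can also hold clues.
grep -i zfs /var/log/syslog 2>/dev/null | tail -n 20
```

A common failure mode after a kernel update is a module built against the old headers; rebuilding against the running kernel fixes it.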


Summary

  • ZFS and the associated hardware were described: 1) Jetson TX2 developer kit, 2) Syba PCI-e Mini SAS Expander (SI-PEX40137), and 3) various TB hard drives.
  • ZFS v2.0.0-rc1 was compiled and initialized on L4T R32.4.4 (aarch64).

Next Article

In the next article, we’ll be exploring ZFS storage pool creation and making decisions about hard disk drives and expansion cards.
