Compiling and Installing OpenZFS 2.3.0
As of this writing, OpenZFS 2.3.0 has not been officially released; the latest version is 2.3.0-rc3, so you will need to compile it yourself. The process is relatively simple.
First, install a reasonably new kernel (the stock Debian kernel on my NAS was 6.1.0):
```shell
apt install linux-image-6.9.7+bpo-amd64 linux-headers-6.9.7+bpo-amd64
```
Once completed, follow the relevant steps in the official ZFS documentation to compile it.
```shell
apt install alien autoconf automake build-essential debhelper-compat dh-autoreconf dh-dkms dh-python dkms fakeroot gawk git libaio-dev libattr1-dev libblkid-dev libcurl4-openssl-dev libelf-dev libffi-dev libpam0g-dev libssl-dev libtirpc-dev libtool libudev-dev parallel po-debconf python3 python3-all-dev python3-cffi python3-dev python3-packaging python3-setuptools python3-sphinx uuid-dev zlib1g-dev
git clone https://github.com/openzfs/zfs
cd ./zfs
git checkout zfs-2.3.0-rc3
sh autogen.sh
./configure
make -s -j$(nproc) native-deb-utils
cd ..
rm openzfs-zfs-dracut_*.deb openzfs-zfs-initramfs_2.3.0-1_all.deb
sudo dpkg -i ./openzfs-zfs-zed_2.3.0-1_amd64.deb ./openzfs-zfs-dkms_2.3.0-1_all.deb ./openzfs-libuutil3_2.3.0-1_amd64.deb ./openzfs-libzfs6_2.3.0-1_amd64.deb ./openzfs-libnvpair3_2.3.0-1_amd64.deb ./openzfs-libzfs6_2.3.0-1_amd64.deb ./openzfs-zfsutils_2.3.0-1_amd64.deb
```
Once the above steps complete successfully, simply reboot the system. After rebooting, you may need to run `zpool import -a` manually to re-import your pools.
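To confirm that the new version is actually in use after the reboot, a quick sanity check (a minimal sketch; the `command -v` guard is just there so the lines are harmless to paste on a host without ZFS — on the NAS itself the first branch runs):

```shell
if command -v zfs >/dev/null 2>&1; then
    # Both the userland tools and the kernel module should report 2.3.0-rc3.
    result=$(zfs version)
else
    result='zfs not installed'
fi
echo "$result"
```

If the userland and kernel-module versions disagree, the DKMS build for the running kernel most likely failed and is worth investigating before touching any pools.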
Upgrading a Zpool to Use RAIDZ Expansion
This step is very straightforward. Assuming the zpool you need to upgrade is named sata-critical, you can do it with a single command:
```shell
zpool upgrade sata-critical
```
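You can then verify that the pool picked up the new feature flag with `zpool get` (the `feature@raidz_expansion` property is part of the OpenZFS 2.3 feature set; the guard only makes the snippet safe to paste on hosts without ZFS):

```shell
if command -v zpool >/dev/null 2>&1; then
    # Should print "enabled" (or "active" once an expansion has run).
    state=$(zpool get -H -o value feature@raidz_expansion sata-critical 2>/dev/null) || state='unavailable'
else
    state='zpool not installed'
fi
echo "$state"
```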
Adding Disks to a Zpool
This step is also very simple:
```shell
zpool attach sata-critical raidz2-0 /dev/disk/by-id/ata-HGST_HUS726XXXXXXXXX_XXXXXXXX
```
After that, you can use `zpool status` to check the progress:
```
$ zpool status sata-critical
  pool: sata-critical
 state: ONLINE
  scan: scrub repaired 0B in 05:19:56 with 0 errors on Sun Nov 17 10:31:03 2024
expand: expansion of raidz2-0 in progress since Sun Nov 17 18:48:14 2024
        1.48T / 17.9T copied at 171M/s, 8.30% done, 1 days 03:56:00 to go
config:

        NAME                                   STATE     READ WRITE CKSUM
        sata-critical                          ONLINE       0     0     0
          raidz2-0                             ONLINE       0     0     0
            ata-HGST_HUS726XXXXXXXXX_XXXXXXXX  ONLINE       0     0     0
            ata-HGST_HUS726XXXXXXXXX_XXXXXXXX  ONLINE       0     0     0
            ata-HGST_HUS726XXXXXXXXX_XXXXXXXX  ONLINE       0     0     0
            ata-HGST_HUS726XXXXXXXXX_XXXXXXXX  ONLINE       0     0     0
            ata-HGST_HUS726XXXXXXXXX_XXXXXXXX  ONLINE       0     0     0

errors: No known data errors
```
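If you only want the progress figure (say, for a monitoring script), a small pipeline can pull it out of the status output. A minimal sketch: the sample line below is copied from the output above, and in real use you would replace it with `line=$(zpool status sata-critical | grep 'copied at')`:

```shell
# Sample progress line taken verbatim from the `zpool status` output above.
line='1.48T / 17.9T copied at 171M/s, 8.30% done, 1 days 03:56:00 to go'

# Extract the "percent done" figure.
pct=$(printf '%s\n' "$line" | grep -oE '[0-9.]+% done' | cut -d'%' -f1)
echo "$pct"   # prints 8.30
```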
Some Minor Drawbacks of Expansion
Suppose you set up a RAIDZ2 zpool using five new 6TB disks. Theoretically, the available space should be 15.643519TB (calculation reference), but after completion, the actual available space is only 13.08TB. The specific results are as follows:
```
$ zfs list sata-critical
NAME            USED  AVAIL  REFER  MOUNTPOINT
sata-critical  8.66T  4.42T  8.66T  /sata-critical
$ zpool list sata-critical -v
NAME                                    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
sata-critical                          27.3T  17.9T  9.39T        -         -     2%    65%  1.00x    ONLINE  -
  raidz2-0                             27.3T  17.9T  9.39T        -         -     2%  65.6%      -    ONLINE
    ata-HGST_HUS726XXXXXXXXX_XXXXXXXX  5.46T      -      -        -         -      -      -      -    ONLINE
    ata-HGST_HUS726XXXXXXXXX_XXXXXXXX  5.46T      -      -        -         -      -      -      -    ONLINE
    ata-HGST_HUS726XXXXXXXXX_XXXXXXXX  5.46T      -      -        -         -      -      -      -    ONLINE
    ata-HGST_HUS726XXXXXXXXX_XXXXXXXX  5.46T      -      -        -         -      -      -      -    ONLINE
    ata-HGST_HUS726XXXXXXXXX_XXXXXXXX  5.46T      -      -        -         -      -      -      -    ONLINE
```
After a bit of research, the reason seems to be related to the implementation of RAIDZ expansion. For more details, you can refer to this article: ZFS RAIDZ Expansion Is Awesome but Has a Small Caveat
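The numbers are consistent with that explanation: the pool above went from four to five disks, and ZFS keeps accounting for space with the pre-expansion data-to-parity ratio. A rough back-of-envelope check (my own arithmetic, not from the article, using the 27.3T raw SIZE that `zpool list` reports):

```shell
summary=$(awk 'BEGIN {
    raw    = 27.3            # total raw capacity in TiB (five 5.46 TiB disks)
    ideal  = raw * 3 / 5     # 5-wide RAIDZ2: 3 of 5 disks hold data
    legacy = raw * 2 / 4     # 4-wide RAIDZ2 ratio from before the expansion
    printf "ideal %.2f TiB vs reported-style %.2f TiB", ideal, legacy
}')
echo "$summary"
```

The 13.65 TiB figure from the old 2-of-4 ratio lands close to the 13.08T total (USED + AVAIL) that `zfs list` shows, with the remaining gap being normal ZFS overhead, whereas a freshly created five-disk pool would report roughly 16.4 TiB.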