Friday, May 11, 2018
Since I had to reformat my RAID array anyway, I decided to give ZFS a go. The reasons: BTRFS proved a bit unstable for me last year, my NAS is already running it (so I can keep experimenting with it there anyway), and ZFS gives me the potential to have a disk that I can read from practically any OS (whereas BTRFS is Linux-only for the foreseeable future).
Installing ZFS on Fedora is dead simple:
dnf install http://download.zfsonlinux.org/fedora/zfs-release$(rpm -E %dist).noarch.rpm
gpg --quiet --with-fingerprint /etc/pki/rpm-gpg/RPM-GPG-KEY-zfsonlinux
dnf install --allowerasing kernel-devel zfs
The --allowerasing option was necessary because there is a conflict with zfs-fuse, which is used by libvirt among other things.
For OSX, ZFS can be downloaded from the OpenZFS on OS X (O3X) project.
Problem: after rebooting, I was not running the right kernel:
[ 0.000000] Linux version 4.12.9-300.fc26.x86_64 (mockbuild@bkernel02.phx2.fedoraproject.org) (gcc version 7.1.1 20170622 (Red Hat 7.1.1-3) (GCC) ) #1 SMP Fri Aug 25 13:09:43 UTC 2017
I should be on a 4.16 kernel with Fedora 28. Apparently my boot menu was not properly updated, presumably a side effect of my Linux dual-boot-with-shared-partitions experiments. Fixed with:
grub2-install /dev/sda
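For the record, grub2-install rewrites the boot loader itself. Had the menu still been stale after that, the usual companion step on a BIOS-booted Fedora (an assumption on my part, I did not need it here) would have been to regenerate the configuration:

grub2-mkconfig -o /boot/grub2/grub.cfg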
Now my boot menu shows updated entries, and booting confirms I'm on the correct kernel image:
BOOT_IMAGE=//vmlinuz-4.16.6-302.fc28.x86_64 root=/dev/mapper/fedora-root ro rd.lvm.lv=fedora/root rd.lvm.lv=fedora/swap nomodeset intel_iommu=on iommu=pt modprobe.blacklist=nouveau pci-stub.ids=01de:1430,10de:0fba LANG=en_US.UTF-8
But the ZFS module is still not found:
modprobe zfs
modprobe: FATAL: Module zfs not found in directory /lib/modules/4.16.6-302.fc28.x86_64
Reinstalled zfs. Still no good. Rebooted (aw! this machine is slow to boot!). Reinstalled zfs-dkms. Ah, now this runs "forever", maybe rebuilding the kernel modules for real this time.
The process then spits out a number of messages such as:
zfs.ko.xz:
Running module version sanity check.
 - Original module
   - No original module exists within this kernel
 - Installation
   - Installing to /lib/modules/4.16.6-302.fc28.x86_64/extra/
But nothing actually gets installed; the following find comes back empty:
find /lib/modules/4.16.6-302.fc28.x86_64/extra/ -name '*zfs*'
What is strange is that the installation process ends with "uninstall" messages:
DKMS: install completed.
  Running scriptlet: zfs-dkms-0.7.9-1.fc28.noarch    2/2
Uninstall of zfs module (zfs-0.7.9-1) beginning:

-------- Uninstall Beginning --------
Module:  zfs
Version: 0.7.9
Kernel:  4.16.6-302.fc28.x86_64 (x86_64)
-------------------------------------

Status: Before uninstall, this module version was ACTIVE on this kernel.
Removing any linked weak-modules
zavl.ko.xz:
 - Uninstallation
   - Deleting from: /lib/modules/4.16.6-302.fc28.x86_64/extra/
rmdir: failed to remove 'extra': Directory not empty
 - Original module
   - No original module was found for this module on this kernel.
   - Use the dkms install command to reinstall any previous module version.

(the same block repeats for znvpair.ko.xz, zunicode.ko.xz, zcommon.ko.xz, zfs.ko.xz, zpios.ko.xz and icp.ko.xz)
depmod.....
DKMS: uninstall completed.
------------------------------
Deleting module version: 0.7.9
completely from the DKMS tree.
------------------------------
Done.
  Erasing     : zfs-dkms-0.7.9-1.fc28.noarch    2/2
  Verifying   : zfs-dkms-0.7.9-1.fc28.noarch    1/2
  Verifying   : zfs-dkms-0.7.9-1.fc28.noarch    2/2
Reinstalled: zfs-dkms.noarch 0.7.9-1.fc28
So it looks like the reinstall process ends by uninstalling the modules it just built??? Filed a bug against ZFS, because I don't think this is the expected behavior.
Manually installing the modules with dkms install worked, though.
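For reference, the manual sequence was something along these lines (the exact version string comes from dkms status; on the ZFS 0.7.x series, the separate spl module may need installing first):

dkms status                # should show zfs/0.7.9 added but not yet installed
dkms install spl/0.7.9     # prerequisite module on the 0.7.x series
dkms install zfs/0.7.9     # builds and installs for the running kernel
modprobe zfs               # the module now loads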
ZFS pools
Recreated the data pool as a ZFS raidz pool with:
zpool create data raidz /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
A bit concerned that the SCSI disk names are not constant, e.g. if I insert some other USB disk. But I guess ZFS is smart enough to scan the disks for its metadata.
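If the sdX names ever do get shuffled (say, by a stray USB disk), the standard remedy as far as I know would be to re-import the pool using the stable /dev/disk/by-id names:

zpool export data
zpool import -d /dev/disk/by-id data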
Bash autocompletion does not work well with ZFS commands.
Created a few volumes on the ZFS pool (see the sketch below) and started restoring my pet VM backing stores onto them. Restoration runs at slightly over 100MB/s, which is a bit disappointing, but that may be a limitation of the read side rather than the write side.
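Each volume is just a zfs create away; a sketch, with dataset names made up for illustration:

zfs create data/vms        # backing stores for the pet VMs
zfs create data/backups    # will later hold the Time Machine backups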
It took a couple of hours to get everything up and running, but at least my three pet VMs are back. I will need to reconstruct the secondary VMs, but that's OK.
Time Machine backup
Tried to put a Time Machine backup there, but no luck so far.
Ah, I finally got from this blog that the important part when you add a subvolume is to run the following afterwards:
chcon -R -t samba_share_t /data
Otherwise, the individual subvolumes are not exported. I hope this only applies to volumes, otherwise it's going to be tedious. Also, of course, you need to have the correct permissions, otherwise you can't write.
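One caveat I know of: chcon labels do not survive an SELinux relabel. The persistent variant (standard SELinux practice, though I have not tested it here) would be:

semanage fcontext -a -t samba_share_t "/data(/.*)?"
restorecon -R /data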
As for the configuration, I got it working with the following (from this site):
[global]
    workgroup = DINECHIN
    security = user
    passdb backend = tdbsam
    printing = cups
    printcap name = cups
    load printers = yes
    cups options = raw
    durable handles = yes
    fruit:aapl = yes
    fruit:time machine = yes
    fruit:advertise_fullsync = true

[backups]
    comment = Time Machine Backup Disk
    browsable = yes
    writable = yes
    create mode = 0600
    directory mode = 0700
    kernel oplocks = no
    kernel share modes = no
    posix locking = no
    vfs objects = catia fruit streams_xattr
    path = /data/backups
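After editing smb.conf, a quick sanity check and service restart (assuming Fedora's standard service names):

testparm -s
systemctl restart smb nmb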
Also, I added the target with the command line, which gives better feedback:
sudo tmutil setdestination -ap smb://ddd@turbo/backups
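On the Mac side, the destination can then be verified with:

tmutil destinationinfo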
The cool thing with ZFS is that I can set a per-machine quota (e.g. since this machine has a 500G disk, I can set the backup quota to 1T). Also, the backup appears quite a bit faster than on my small local USB disk, despite going over the network.
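Setting that quota is a one-liner, assuming the share lives in a data/backups dataset as configured above:

zfs set quota=1T data/backups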
Time to restore the rsync-based backup of my NAS, and everything should be peachy.