[Discuss] Adventures in Modern Nextclouding (with KVM) (on ZFS)

Rich Pieri richard.pieri at gmail.com
Sun Jun 4 11:46:54 EDT 2023


Just some notable notes from my experience over the past week getting
Nextcloud working the way I want.

I still hate Docker. The more I try to do simple things with it, the
more ways I see how terrible the core design is. Yes, I get it that it
scales out very well, but due to early design decisions made at Docker
Inc, it doesn't scale *in* at all.

Snap packages are a much better fit for scaling in/down.

But I do need to keep the Nextcloud services separated from the main
server to avoid conflicts on specific IP address/port combinations and
to keep it from being accessible from outside the home LAN. I decided
to run it as a small Ubuntu KVM guest on the Debian host, with ZFS
storage underneath.
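
For reference, creating a guest like that from the command line could
look roughly like this with virt-install; the guest name, memory/CPU
sizes, disk size, ISO path, and bridge name are all placeholders for
my setup (bridge setup is covered next):

virt-install \
  --name nextcloud \
  --memory 4096 --vcpus 2 \
  --os-variant ubuntu22.04 \
  --disk path=/pool/nc/nextcloud.qcow2,format=qcow2,size=20 \
  --network bridge=br0 \
  --cdrom /var/lib/libvirt/images/ubuntu-22.04-live-server-amd64.iso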

When documentation and forum posts say KVM guests need a bridge
interface for connectivity to the local LAN, they frequently omit the
fact that the virbr0 interface created by libvirtd is *not* the needed
bridge. The KVM host's physical interfaces need to be put behind their
own bridge interface with this interface configured with the LAN IP
settings.
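
On a Debian host using ifupdown and bridge-utils, a minimal sketch
looks like this; the NIC name and addresses are placeholders for my
LAN:

# /etc/network/interfaces
# the physical NIC carries no IP configuration of its own
auto enp3s0
iface enp3s0 inet manual

# the bridge gets the LAN IP settings; guests attach to br0
auto br0
iface br0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge_ports enp3s0
    bridge_stp off
    bridge_fd 0

virbr0 can stay around for NAT-only guests; the Nextcloud guest just
gets attached to br0 instead.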

qcow2 image on ZFS dataset performance is perfectly reasonable, but it
can be improved by creating the dataset with a 64K record size,
matching qcow2's default cluster size:

zfs create -o recordsize=64k pool/nc
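
The 64K figure is qcow2's default cluster size, but it can be pinned
explicitly when creating the image so the two stay matched; the path
and size here are placeholders:

qemu-img create -f qcow2 -o cluster_size=64k /pool/nc/nextcloud.qcow2 20G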

While I'm here, enable POSIX ACLs and set the extended attribute
storage type on the user data dataset. Setting SA-based xattrs is
worth a 3x performance improvement for ACL access, which makes it
definitely worth doing. This will be important later.

zfs create pool/nc/${USER}
zfs set acltype=posixacl pool/nc/${USER}
zfs set xattr=sa pool/nc/${USER}

Note that this is specific to ZFS on Linux.
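
A quick sanity check that the properties took (recordsize on the child
dataset is inherited from pool/nc):

zfs get recordsize,acltype,xattr pool/nc/${USER}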

First go at this I put the user storage on a qcow2 vdisk. It worked but
it was slow, and I couldn't access files from the host without running
a sync tool to replicate everything. Didn't like that.

Reworked to use an NFS volume exported from the host. That got me host
access to the files, but NFS itself is a CPU pig.

Third try I started experimenting with filesystem passthrough. See
previous post about ACLs. libvirtd passthrough uses Plan 9 (9p), and
these modules typically aren't included in the kernel or default
initrd. Fix this in the guest:

vi /etc/initramfs-tools/modules

Add these lines:

9p
9pnet
9pnet_virtio

Then rebuild the initrd:

update-initramfs -u

Shut down the VM. Configure a filesystem "device" in the VM manager,
give it a useful name (e.g., nc-$USER) and the path to the dataset
(e.g., /pool/nc/${USER}). Start the VM and mount the volume in
/etc/fstab:

nc-$USER   /var/snap/nextcloud/common/nextcloud/data/$USER   9p   trans=virtio   0   2

The long mount path puts the data volume within the default storage
space presented by various Nextcloud clients. Note: since this is
running as a snap, using the External Storage app still requires
mounting the volume somewhere within the Nextcloud snap's fence.
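
For the record, what the VM manager writes into the domain XML (virsh
edit nextcloud) looks something like the snippet below. The username
is a placeholder, and accessmode='mapped' is my assumption since it
matches the libvirt-qemu ownership described next; check what your
copy actually wrote:

<filesystem type='mount' accessmode='mapped'>
  <source dir='/pool/nc/alice'/>
  <target dir='nc-alice'/>
</filesystem>

The target dir is the mount tag used in the fstab line above.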

Now to get host access. On the host, the files are owned by the UID/GID
that the KVM process runs as (libvirt-qemu on my system). I want to
permit other users access at the host filesystem level, which is where
the ACL settings from earlier come into play.

Clear any ACLs that might be lying around from testing:

sudo setfacl -Rb /pool/nc/${USER}

Set defaults for the POSIX owner. We only care about the files
directory:

sudo setfacl -R -d -m "u::rwx,g::---,o::---" /pool/nc/${USER}/files
sudo setfacl -R -m "u::rwx,g::---,o::---" /pool/nc/${USER}/files

And finally, grant read-only access to the human user:

sudo setfacl -R -d -m "u:${USER}:r-x" /pool/nc/${USER}/files
sudo setfacl -R -m "u:${USER}:r-x" /pool/nc/${USER}/files
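
A quick way to confirm the result from the human user's account:

getfacl /pool/nc/${USER}/files     # should show the user:${USER}:r-x entries
ls /pool/nc/${USER}/files          # read access works
touch /pool/nc/${USER}/files/probe # should fail with "Permission denied"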

The last piece is backups of the KVM guest. The plan is to freeze the
guest filesystem (virsh domfsfreeze nextcloud), snapshot the ZFS
dataset, then unfreeze the guest. ZFS send/receive should Just Work(tm)
at that point. But this needs to be tested.
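
An untested sketch of that sequence, assuming the guest is named
nextcloud and has qemu-guest-agent installed (domfsfreeze needs the
agent), with placeholder names for the backup destination:

SNAP="backup-$(date +%Y%m%d-%H%M)"
virsh domfsfreeze nextcloud
zfs snapshot -r pool/nc@"$SNAP"
virsh domfsthaw nextcloud
# replicate off-box later; host and target dataset are placeholders
zfs send -R pool/nc@"$SNAP" | ssh backuphost zfs receive -u backup/nc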

-- 
\m/ (--) \m/

