Serviced uses thin pools with fragile filesystems (XFS and EXT4) atop them, while performing a bunch of block-level operations beneath those filesystems, with the occasional mishap occurring as a result. Moreover, operators collecting reasonable amounts of production data will want to preserve, compress, deduplicate, and back it up in a consistent manner - all of which ZFS can do, and (AFAIK) BTRFS/HAMMER/etc. can too.
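For instance, a consistent backup under ZFS is just an atomic snapshot plus a send stream. A minimal Go sketch of the idea, shelling out to the `zfs` CLI (the `tank/serviced` dataset and `/backups` path are made-up names for illustration, not anything serviced defines today):

```go
// Hypothetical example: atomically snapshot a serviced data dataset and
// stream it to a backup file. Dataset and paths are illustrative.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

func main() {
	snap := fmt.Sprintf("tank/serviced@backup-%d", time.Now().Unix())

	// 'zfs snapshot -r' is atomic across the dataset and its children,
	// so the backup is crash-consistent by construction.
	if err := exec.Command("zfs", "snapshot", "-r", snap).Run(); err != nil {
		fmt.Fprintln(os.Stderr, "snapshot failed:", err)
		os.Exit(1)
	}

	out, err := os.Create("/backups/serviced.zfs")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer out.Close()

	// 'zfs send -R' serializes the snapshot (and its descendants) as a
	// replication stream that 'zfs recv' can restore elsewhere.
	send := exec.Command("zfs", "send", "-R", snap)
	send.Stdout = out
	if err := send.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "send failed:", err)
		os.Exit(1)
	}
}
```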
We build serviced for Arch Linux internally (well, CI does it for us), and our little Arch fork is ZFS-native (built into the kernel), with root mounts and most workloads on ZFS (unless we need XFS/F2FS/etc. for specific things). Using ZFS for Docker storage has pretty much fixed our corruption issues with serviced; we also have it back the media underneath the thin pool, which helps because ZFS operations are transactional/atomic.
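The setup itself is mundane: dedicate a dataset to Docker's root directory and flip the storage driver. A rough sketch below (`tank/docker` is an illustrative name; Docker's `zfs` storage driver itself is real and documented):

```go
// Hypothetical setup sketch: give Docker a dedicated ZFS dataset and use
// Docker's documented 'zfs' storage driver. The dataset name is illustrative.
package main

import (
	"os"
	"os/exec"
)

func run(args ...string) {
	cmd := exec.Command(args[0], args[1:]...)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}

func main() {
	// Dataset mounted where Docker expects its root directory.
	run("zfs", "create", "-o", "mountpoint=/var/lib/docker", "tank/docker")

	// Docker's zfs graph driver then manages per-layer datasets itself;
	// /etc/docker/daemon.json just needs: {"storage-driver": "zfs"}
	run("systemctl", "restart", "docker")
}
```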
However, this makes the requirement for thin pools (with yet another filesystem atop them) redundant, and it reduces the efficacy of the underlying advanced storage (ZFS's default recordsize is 128K, and these days dnodes are dynamically sized). If we truly need block storage, it's baked right in (zvols), but what's really needed for serviced's "volume" semantic is the ability to control datasets directly - see the sketch below (it would also be handy to leverage the NFS integration for "DFS").
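To make that concrete, the driver surface serviced would need is tiny. The interface shape here (Create/Snapshot/Rollback/Remove) is hypothetical - I haven't mirrored serviced's actual volume driver API - and everything is delegated to the `zfs` CLI:

```go
// Sketch of a ZFS-backed volume driver for serviced. The method set below
// is hypothetical, meant to suggest the create/snapshot/rollback semantics
// a volume driver needs; all heavy lifting is delegated to the zfs CLI.
package zfsdriver

import (
	"fmt"
	"os/exec"
	"path"
)

type Driver struct {
	Root string // parent dataset, e.g. "tank/serviced" (illustrative)
}

func zfs(args ...string) error {
	out, err := exec.Command("zfs", args...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("zfs %v: %v: %s", args, err, out)
	}
	return nil
}

// Create makes each serviced "volume" a first-class dataset, so quota,
// compression, and checksumming can be applied per tenant.
func (d *Driver) Create(name string) error {
	return zfs("create", path.Join(d.Root, name))
}

// Snapshot is atomic and cheap; no thin-pool gymnastics required.
func (d *Driver) Snapshot(name, label string) error {
	return zfs("snapshot", path.Join(d.Root, name)+"@"+label)
}

// Rollback restores a volume to a prior snapshot in one transaction
// (-r also discards any snapshots newer than the target).
func (d *Driver) Rollback(name, label string) error {
	return zfs("rollback", "-r", path.Join(d.Root, name)+"@"+label)
}

// Remove destroys the dataset along with its snapshots.
func (d *Driver) Remove(name string) error {
	return zfs("destroy", "-r", path.Join(d.Root, name))
}
```

The "DFS" angle falls out of the same interface: `zfs set sharenfs=on tank/serviced/<vol>` exports a dataset over NFS without anyone touching /etc/exports.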
This would also benefit the commercial deployments out there: the infrastructure improvement yields much more efficient storage (ZSTD, at varying levels, is available), improves performance (ARC is much smarter than Linux's page cache, and fewer bytes are read from disk when data is compressed), and, most importantly, provides integrity guarantees for on-disk data that are not possible with the current approach (EXT4 and XFS do not checksum data blocks at all, let alone validate them on read before handing data back to the caller). Concurrently, ZFS has native inline encryption, and allowing serviced to manage it would deliver additional security boundaries to customers and community users with zero effort spent actually implementing any of it (the filesystems do the work; the interface to them appears to be missing). While ZFS may not be GPL, it is battle-hardened and well-proven. If BTRFS no longer stands for "bit rot filesystem," then it might be worth a shot too.
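Both of those are a property away. One more hedged sketch (dataset names are again made up; `compression=zstd-7`, `encryption=aes-256-gcm`, `keyformat`, and `keylocation` are standard OpenZFS properties):

```go
// Sketch: the properties serviced would set to get compression and at-rest
// encryption per volume. Dataset names are illustrative; the property
// names/values are standard OpenZFS ones.
package main

import (
	"os"
	"os/exec"
)

func main() {
	// ZSTD with an explicit level; plain "zstd" defaults to level 3.
	set := exec.Command("zfs", "set", "compression=zstd-7", "tank/serviced/vol1")
	set.Stderr = os.Stderr
	if err := set.Run(); err != nil {
		os.Exit(1)
	}

	// Encryption must be chosen at dataset creation time; a new encrypted
	// volume would be created like so (the passphrase is prompted for here,
	// though a key file is the more automatable keylocation).
	create := exec.Command("zfs", "create",
		"-o", "encryption=aes-256-gcm",
		"-o", "keyformat=passphrase",
		"-o", "keylocation=prompt",
		"tank/serviced/vol2")
	create.Stdin, create.Stdout, create.Stderr = os.Stdin, os.Stdout, os.Stderr
	if err := create.Run(); err != nil {
		os.Exit(1)
	}
}
```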