On a pure-ZFS system, traditional filesystem statistics don't make much sense. The size presented for each filesystem, for example, is the data stored in that dataset plus the space still available in the pool, so the total varies from dataset to dataset and reflects neither the actual pool size nor its percent capacity. In addition, the ease with which datasets can be created can lead to a million-filesystem syndrome, as illustrated:
(ad48aa05)[cyberleo@vitani ~]$ fastfetch -l none -s Disk
Disk (/): 5.80 GiB / 340.74 GiB (2%) - zfs
Disk (/home): 23.00 KiB / 334.93 GiB (0%) - zfs
Disk (/home/cyberleo): 12.46 MiB / 334.95 GiB (0%) - zfs
Disk (/srv/motion): 68.38 GiB / 403.32 GiB (17%) - zfs
Disk (/usr/obj): 8.13 GiB / 343.06 GiB (2%) - zfs
Disk (/usr/src): 3.47 GiB / 338.40 GiB (1%) - zfs
Disk (/var/audit): 23.00 KiB / 334.93 GiB (0%) - zfs
Disk (/var/chroot): 188.50 KiB / 334.93 GiB (0%) - zfs
Disk (/var/crash): 23.00 KiB / 334.93 GiB (0%) - zfs
Disk (/var/log): 1.41 GiB / 336.34 GiB (0%) - zfs
Disk (/var/mail): 135.52 MiB / 335.07 GiB (0%) - zfs
Disk (/var/tmp): 23.00 KiB / 334.93 GiB (0%) - zfs
None of this is terribly useful; in fact, it's quite overwhelming.
Instead of listing per-mountpoint statistics, I propose listing ZFS pools separately, with pool capacity and usage (and perhaps fragmentation percentage):
Pool (vitani): 99.6 GiB / 464 GiB (21%, 11% frag)
Data for this can be pulled from zpool or zfs, though the two may disagree on more complex pool configurations: for a raidz pool, zpool reports raw pool capacity and consumption with the RAID redundancy overhead included, while zfs reports usable space with that redundancy factored out. Either view can be useful.
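As a rough sketch of what the pool line could look like, the machine-readable output of `zpool list -Hp -o name,size,alloc,frag` (real zpool properties; the byte values below are invented for illustration) can be reduced to the proposed format with a few lines of parsing:

```python
import subprocess


def format_pool_line(raw: str) -> str:
    """Turn one line of `zpool list -Hp -o name,size,alloc,frag`
    output (tab-separated, exact byte counts) into the proposed
    Pool display line."""
    name, size, alloc, frag = raw.strip().split("\t")
    size_b, alloc_b = int(size), int(alloc)
    pct = round(alloc_b / size_b * 100)
    gib = 1024 ** 3
    return (f"Pool ({name}): {alloc_b / gib:.1f} GiB / "
            f"{size_b / gib:.0f} GiB ({pct}%, {frag}% frag)")


def pool_summary(pool: str) -> str:
    """Query zpool for a live pool (untested sketch; assumes zpool
    is on PATH) and format the result."""
    out = subprocess.run(
        ["zpool", "list", "-Hp", "-o", "name,size,alloc,frag", pool],
        capture_output=True, text=True, check=True,
    ).stdout
    return format_pool_line(out)


# Canned sample output (values invented to match the example above):
sample = "vitani\t498216206336\t106945740800\t11\n"
print(format_pool_line(sample))
# → Pool (vitani): 99.6 GiB / 464 GiB (21%, 11% frag)
```

The `-Hp` flags (no header, parsable exact values) keep the parsing trivial; the same approach would work against `zfs list -Hp` for the redundancy-adjusted view.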
ZFS filesystems configured with quotas or reservations do have meaningful used and free values, and those can still be selected individually via --disk-folders.
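For datasets that do have quotas, the used/free figures could be derived the same way: `zfs list -Hp -o name,used,quota` (real zfs properties; with `-p`, an unset quota prints as 0) makes it easy to show only the datasets where a quota gives the numbers meaning. A minimal sketch, with invented sample values:

```python
def quota_lines(raw: str) -> list:
    """From `zfs list -Hp -o name,used,quota` output, keep only
    datasets with an explicit quota and report used vs. quota;
    datasets without a quota are skipped entirely."""
    gib = 1024 ** 3
    lines = []
    for row in raw.strip().splitlines():
        name, used, quota = row.split("\t")
        if quota in ("0", "-"):  # no quota set on this dataset
            continue
        used_b, quota_b = int(used), int(quota)
        pct = round(used_b / quota_b * 100)
        lines.append(f"Disk ({name}): {used_b / gib:.2f} GiB / "
                     f"{quota_b / gib:.0f} GiB ({pct}%)")
    return lines


# Canned sample: one dataset with a 100 GiB quota, one without.
sample = (
    "vitani/srv/motion\t73421783040\t107374182400\n"
    "vitani/home\t23552\t0\n"
)
print("\n".join(quota_lines(sample)))
# → Disk (vitani/srv/motion): 68.38 GiB / 100 GiB (68%)
```

This keeps the default output down to the pool line plus the handful of datasets where a fixed size actually exists.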
Wanted features:
ZFS pool capacity statistics