ZFS is supported. I did some tests: I created a raidz2 pool and added data.
My pool at the beginning:
# zpool list -v
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        2.75G   514K  2.75G        -         -    1%     0%  1.00x  ONLINE  -
  raidz2-0  2.75G   514K  2.75G        -         -    1%  0.01%      -  ONLINE
    vdb     1.99G      -      -        -         -     -      -      -  ONLINE
    vdc     2.99G      -      -        -         -     -      -      -  ONLINE
    vdd     1014M      -      -        -         -     -      -      -  ONLINE
What Linux says - note the difference:
# df -hT
Filesystem  Type  Size  Used  Avail  Use%  Mounted on
data        zfs   807M  128K   807M    1%  /data
The check-plugin says the same:
# /usr/lib64/nagios/plugins/disk-usage
Mountpoint ! Type ! Size      ! Used     ! Avail    ! Use%
-----------+------+-----------+----------+----------+-------
/data      ! zfs  ! 806.9MiB  ! 128.0KiB ! 806.8MiB ! 0.0%
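The gap between zpool's 2.75G and df's 807M is expected: a three-disk raidz2 vdev spends two of every three blocks on parity, and ZFS additionally reserves "slop" space for internal operations. Here is a back-of-the-envelope check in Python; the 128 MiB minimum slop is an assumption based on current OpenZFS defaults for spa_slop_shift, and metadata overhead is ignored:

raw = 2.75 * 1024             # pool SIZE from `zpool list`, in MiB
data_fraction = (3 - 2) / 3   # raidz2 over 3 disks: 1 data disk, 2 parity
slop = max(raw / 32, 128)     # 1/32 of the pool, but at least 128 MiB (assumed)
usable = raw * data_fraction - slop
print(round(usable))          # ~811 MiB, in the ballpark of df's 807M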
According to df, adding 600M of data to the pool should result in roughly 75% usage:
# dd if=/dev/urandom of=/data/test bs=10M count=60 && sync
Let's check:
# df -hT
Filesystem  Type  Size  Used  Avail  Use%  Mounted on
data        zfs   807M  599M   209M   75%  /data
Plugin:
# /usr/lib64/nagios/plugins/disk-usage
Mountpoint ! Type ! Size      ! Used     ! Avail    ! Use%
-----------+------+-----------+----------+----------+-------
/data      ! zfs  ! 806.9MiB  ! 598.1MiB ! 208.8MiB ! 74.1%
So the plugin and df agree perfectly. ZFS says:
# zpool list
NAME         SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ  FRAG    CAP  DEDUP  HEALTH  ALTROOT
data        2.75G  1.76G  1014M        -         -    0%    63%  1.00x  ONLINE  -
  raidz2-0  2.75G  1.76G  1014M        -         -    0%  64.0%      -  ONLINE
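The pool-level numbers line up once parity is factored in: on a three-disk raidz2, every data block is written together with two parity blocks, so raw allocation is roughly three times the logical data. A quick sanity check (padding and metadata are ignored):

written = 599                  # MiB used according to df
alloc = written * 3            # 2 parity blocks per data block on a 3-disk raidz2
print(round(alloc / 1024, 2))  # ~1.75 GiB, matching zpool's 1.76G ALLOC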
According to the plugin, adding an additional 250M of data should fail, whereas zpool list suggests it should work. Let's see:
# dd if=/dev/urandom of=/data/test2 bs=10M count=25 && sync
dd: error writing '/data/test2': No space left on device
# df -hT
Filesystem  Type  Size  Used  Avail  Use%  Mounted on
data        zfs   807M  807M   512K  100%  /data
# /usr/lib64/nagios/plugins/disk-usage
Mountpoint ! Type ! Size      ! Used     ! Avail    ! Use%
-----------+------+-----------+----------+----------+------------------
/data      ! zfs  ! 806.9MiB  ! 806.4MiB ! 512.0KiB ! 99.9% [CRITICAL]
# zpool list
NAME   SIZE  ALLOC  FREE  CKPOINT  EXPANDSZ  FRAG  CAP  DEDUP  HEALTH  ALTROOT
data  2.75G  2.37G  387M        -         -   12%  86%  1.00x  ONLINE  -
So don't rely on zpool list.
man zpoolprops says about free: "The amount of free space available in the pool. By contrast, the zfs(8) available property describes how much new data can be written to ZFS filesystems/volumes. The zpool free property is not generally useful for this purpose, and can be substantially more than the zfs available space. This discrepancy is due to several factors, including raidz parity; zfs reservation, quota, refreservation, and refquota properties; and space set aside by spa_slop_shift (see zfs(4) for more information)."
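In other words: to learn how much can actually be written, ask the filesystem, not the pool. A minimal sketch of what df and the plugin effectively do, using statvfs() on the mountpoint from the example above:

import os

# statvfs() reports the zfs "available" view (the same numbers that
# df and the plugin show), not zpool's FREE.
st = os.statvfs('/data')
block = st.f_frsize
print(f'size:  {st.f_blocks * block / 2**20:.1f} MiB')
print(f'avail: {st.f_bavail * block / 2**20:.1f} MiB')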
Huh, thanks for the thorough investigation. This is a zpool managed by Proxmox, so I assume it's doing something on its end that isn't carrying over to df (like maybe subvols).
Which variant of the Monitoring Plugins do you use?
Bug description
disk-usage doesn't support ZFS zpools. Compare this to zpool list.
Steps to reproduce - Plugin call
disk-usage
Steps to reproduce - Data
Just run disk-usage on a system with a zpool.
Environment
Linux eloo 6.2.16-3-pve #1 SMP PREEMPT_DYNAMIC PVE 6.2.16-3 (2023-06-17T05:58Z) x86_64 GNU/Linux
Plugin Version
disk-usage: v2023071203 by Linuxfabrik GmbH, Zurich/Switzerland
Python version
Python 3.11.2
List of Python modules
No response
Additional Information
This is a pretty easy fix. I'll leave this example code if you'd like to implement something similar:
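A minimal sketch of one way to do it, assuming the plugin can shell out to zpool list (-H suppresses headers, -p prints exact byte values; the zpool_usage() helper below is illustrative):

import subprocess

def zpool_usage():
    # `zpool list -Hp` prints one tab-separated line per pool,
    # with exact byte values that are easy to parse.
    out = subprocess.run(
        ['zpool', 'list', '-Hp', '-o', 'name,size,allocated,free'],
        capture_output=True, text=True, check=True,
    ).stdout
    pools = []
    for line in out.strip().splitlines():
        name, size, alloc, free = line.split('\t')
        pools.append({
            'name': name,
            'size': int(size),
            'alloc': int(alloc),
            'free': int(free),
            'cap_percent': 100 * int(alloc) / int(size),
        })
    return pools

for pool in zpool_usage():
    print(f"{pool['name']}: {pool['cap_percent']:.1f}% used")

Given the investigation above, though, zpool's FREE overstates what can actually be written on raidz pools, so a pool-level check like this would complement the df-based numbers rather than replace them.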