MauriceNino opened this issue 1 year ago
@MauriceNino, can you provide output of some of the underlying commands on machines with ZFS?
Thank you in advance!
Yeah, I just added them in the Additional context section. The output of both commands can be trimmed down though, so you can request only the necessary information and omit the header line.
For more specific output, please tag @Jamy-L, as he is the user who has a ZFS system.
@MauriceNino oh, sorry ... I did not see this ;-) Will have a look at it.
I think the exact command you want to be using here is `zpool list -v -H -P -p` (an explanation of the options can be found here).
Here is the output of the command on the user's machine:
```
pool_0       1992864825344  779701190656  1213163634688  -  -  0  39  1.00  ONLINE  -
  mirror-0   1992864825344  779701190656  1213163634688  -  -  0  39  -     ONLINE
    /dev/sdb1  -  -  -  -  -  -  -  -  ONLINE
    /dev/sdc1  -  -  -  -  -  -  -  -  ONLINE
```
The fields are (in order): name, size, allocated, free, checkpoint, expandsize, fragmentation, capacity, dedupratio, health, altroot
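Since `-H` makes the output tab-separated (no header) and `-p` prints exact byte counts, parsing it is mostly line splitting. A minimal sketch, assuming Node's built-in `child_process` (`parseZpoolList` and the row heuristics are illustrative, not an existing API):

```js
const { execSync } = require('child_process');

// Parse `zpool list -v -H -P -p` into pools with their vdevs and disks.
// Heuristic: rows starting with '/' are device paths (thanks to -P),
// rows with a numeric dedup ratio are pools, everything else is a
// grouping vdev such as mirror-0 or raidz1-0.
function parseZpoolList() {
  const raw = execSync('zpool list -v -H -P -p').toString();
  const pools = [];

  for (const line of raw.split('\n').filter(Boolean)) {
    // Field order: name, size, alloc, free, ckpoint, expandsz,
    // frag, cap, dedup, health, altroot
    const fields = line.trim().split('\t');
    const [name, size, alloc, free] = fields;
    const dedup = fields[8];

    if (name.startsWith('/')) {
      const pool = pools[pools.length - 1];
      // Single-disk pools list devices directly under the pool row
      if (pool.vdevs.length === 0) pool.vdevs.push({ type: 'stripe', devices: [] });
      pool.vdevs[pool.vdevs.length - 1].devices.push(name);
    } else if (dedup !== '-') {
      // Only pool rows report a dedup ratio
      pools.push({
        name,
        size: parseInt(size, 10),
        alloc: parseInt(alloc, 10),
        free: parseInt(free, 10),
        health: fields[9],
        vdevs: [],
      });
    } else {
      // Grouping vdev row, e.g. mirror-0 -> type 'mirror'
      pools[pools.length - 1].vdevs.push({ type: name.replace(/-\d+$/, ''), devices: [] });
    }
  }
  return pools;
}
```

Run against the sample above, this yields a single `pool_0` entry with one `mirror` vdev containing `/dev/sdb1` and `/dev/sdc1`.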
Hi @sebhildebrandt - did you get around to looking into it already? :) If you need more information or anything, please let me know; I can ask users to provide more samples.
Hello, I'm just here to post some useful info that could help you implement it more easily. When using ZFS, we can extract the required information from two essential commands: `zpool list` and `zpool status`.
From `zpool list` we can retrieve the list of available pools, their sizes, and the space used. Example output:
```
NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
Aux_Pool      928G  6.73G   921G        -         -     0%     0%  1.00x    ONLINE  -
Aux_Pool_2    464G  74.0G   390G        -         -     1%    15%  1.00x    ONLINE  -
HA_Pool      1.81T   653G  1.17T        -         -     3%    35%  1.00x    ONLINE  -
```
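Note that without `-p` these sizes come back as human-readable strings, so a consumer needs a small converter. A sketch (ZFS prints base-1024 units, so `1T` means 1 TiB):

```js
// Convert a zpool/zfs human-readable size ('6.73G', '1.81T') to bytes.
function zfsSizeToBytes(str) {
  if (str === '-') return null; // empty columns print as '-'
  const exponents = { K: 1, M: 2, G: 3, T: 4, P: 5, E: 6 };
  const match = /^([\d.]+)([KMGTPE])?$/.exec(str);
  if (!match) return NaN;
  return Math.round(parseFloat(match[1]) * 1024 ** (exponents[match[2]] ?? 0));
}

console.log(zfsSizeToBytes('928G')); // ≈ 9.96e11 bytes
```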
From `zpool status` we can retrieve the pool types and the disks assigned to them. Example output:
```
  pool: Aux_Pool
 state: ONLINE
config:

	NAME                                             STATE     READ WRITE CKSUM
	Aux_Pool                                         ONLINE       0     0     0
	  usb-Touro_Mobile_57584731413936464A334455-0:0  ONLINE       0     0     0

errors: No known data errors

  pool: Aux_Pool_2
 state: ONLINE
config:

	NAME                                 STATE     READ WRITE CKSUM
	Aux_Pool_2                           ONLINE       0     0     0
	  ata-TOSHIBA_MQ01ABD050_42TDS25VS   ONLINE       0     0     0

errors: No known data errors

  pool: HA_Pool
 state: ONLINE
config:

	NAME                                            STATE     READ WRITE CKSUM
	HA_Pool                                         ONLINE       0     0     0
	  mirror-0                                      ONLINE       0     0     0
	    ata-Hitachi_HDS723020BLA642_MN5210F32X3SMK  ONLINE       0     0     0
	    ata-HDS723020ALA640_RSD_HUA_MK0171YFGDTNPA  ONLINE       0     0     0

errors: No known data errors
```
After that, it's just a matter of parsing the output and connecting the dots. I hope this can be helpful.
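For what it's worth, a rough sketch of that parsing step (illustrative only; it keys off the layout of the `config:` section, which is stable in practice but not a formal API):

```js
// Rough sketch: turn `zpool status` output into
// [{ pool, vdevs: [{ type, disks: [...] }] }].
function parseZpoolStatus(output) {
  const pools = [];
  let current = null;
  let inConfig = false;

  for (const line of output.split('\n')) {
    const pool = /^\s*pool:\s*(\S+)/.exec(line);
    if (pool) {
      current = { pool: pool[1], vdevs: [] };
      pools.push(current);
      inConfig = false;
      continue;
    }
    if (/^\s*config:/.test(line)) { inConfig = true; continue; }
    if (/^\s*errors:/.test(line)) { inConfig = false; continue; }
    if (!inConfig || !current) continue;

    // Config rows look like: <name> <state> <read> <write> <cksum>
    const row = /^\s+(\S+)\s+(ONLINE|DEGRADED|FAULTED|OFFLINE|UNAVAIL|REMOVED)\b/.exec(line);
    if (!row) continue;                  // skips the NAME/STATE header line
    const name = row[1];
    if (name === current.pool) continue; // the pool's own summary row

    if (/^(mirror|raidz|draid|spare|log|cache)/.test(name)) {
      // Grouping vdev, e.g. mirror-0 or raidz1-0
      current.vdevs.push({ type: name.replace(/-\d+$/, ''), disks: [] });
    } else {
      // Disks listed directly under the pool form an implicit stripe
      if (current.vdevs.length === 0) current.vdevs.push({ type: 'stripe', disks: [] });
      current.vdevs[current.vdevs.length - 1].disks.push(name);
    }
  }
  return pools;
}
```

Fed with the sample above, this maps `Aux_Pool` and `Aux_Pool_2` to single-disk stripes and `HA_Pool` to a `mirror` vdev with its two `ata-*` disks.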
**Is your feature request related to a problem? Please describe.**
In my project Dash. I am using `systeminformation` to gather hardware information, which includes drives, to show them in a self-hosted web dashboard. This works fine for drives & normal RAIDs, but fails for ZFS pools.

**Describe the solution you'd like**
I would like to have output similar to what's given for RAIDs in `blockDevices()`. This includes (see the hypothetical sketch after this list):
- a `group` for drives that are included in the ZFS pool
- an entry with `name: 'md...'`, `type: 'raid.'` added to the output with information about the pool size
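To make that concrete, here is a hypothetical shape for those entries, mirroring what `blockDevices()` returns for mdraid today (the `type: 'raid1'` mapping for a mirror and the trimmed field set are my own illustration, not the library's actual output):

```js
// Hypothetical blockDevices() entries for the mirrored pool_0 from the
// Additional context section below (abbreviated; values illustrative).
const zfsBlocks = [
  { name: 'sdb1', type: 'part', fsType: 'zfs_member', group: 'pool_0', device: '/dev/sdb' },
  { name: 'sdc1', type: 'part', fsType: 'zfs_member', group: 'pool_0', device: '/dev/sdc' },
  // Virtual entry for the pool itself, analogous to the 'md0' entry of a RAID:
  { name: 'pool_0', type: 'raid1', fsType: 'zfs', size: 1992864825344, device: '' },
];
```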
Then there needs to be a fix for the data in `fsSize()`, because that seems to rely solely on `df`, which provides incorrect output for ZFS pools. Entries of type `zfs` should be checked against the output of the command `[zfs|zpool] list`.
You could use either `zfs list` or `zpool list` for gathering that information; the first one reports usable free space (because ZFS reserves some space), while the latter shows raw disk space. Maybe adding both to the output wouldn't hurt (e.g. one of type `zfs` and one of type `zfs_raw`). Generally, it seems like ZFS reserves exactly 3.2% of the total size, so that might not be necessary, as you could always calculate one value from the other.
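For what it's worth, OpenZFS does reserve "slop space" for internal bookkeeping, by default 1/32 of the pool (≈ 3.125%), which lines up with the ~3.2% observed here. A minimal sketch of deriving one value from the other, assuming that constant (an empirical estimate, not a documented guarantee):

```js
// Empirical reservation fraction from the observation above; OpenZFS's
// default slop space is 1/32 ≈ 3.125%, so ~3.2% is in the right ballpark.
const ZFS_RESERVED_FRACTION = 0.032;

// Estimate usable bytes from the raw pool size reported by `zpool list`.
const usableFromRaw = (rawBytes) =>
  Math.round(rawBytes * (1 - ZFS_RESERVED_FRACTION));

// Estimate raw bytes from the usable size reported by `zfs list`.
const rawFromUsable = (usableBytes) =>
  Math.round(usableBytes / (1 - ZFS_RESERVED_FRACTION));
```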
**Describe alternatives you've considered**
There is a library `zfs` which is basically a wrapper around the CLI tool, but it makes usage a bit harder, as the output data needs to be matched manually.

**Additional context**
In reference to https://github.com/MauriceNino/dashdot/issues/728, opened by @Jamy-L.
Here is the user's example output as of right now:
`systeminformation` data:

```js
const disks = [
  {
    device: '/dev/sdb',
    type: 'HD',
    name: 'WDC WD20EARS-00M',
    vendor: 'Western Digital',
    size: 2000398934016,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: 'AB51',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  },
  {
    device: '/dev/sdc',
    type: 'HD',
    name: 'WDC WD20EZRX-00D',
    vendor: 'Western Digital',
    size: 2000398934016,
    bytesPerSector: null,
    totalCylinders: null,
    totalHeads: null,
    totalSectors: null,
    totalTracks: null,
    tracksPerCylinder: null,
    sectorsPerTrack: null,
    firmwareRevision: '0A80',
    serialNum: '',
    interfaceType: 'SATA',
    smartStatus: 'unknown',
    temperature: null
  }
]

const sizes = [
  { fs: 'pool_0', type: 'zfs', size: 1273137856512, used: 131072, available: 1273137725440, use: 0, mount: '/mnt/host/mnt/pool_0', rw: false },
  { fs: 'pool_0/media', type: 'zfs', size: 1788947726336, used: 515810000896, available: 1273137725440, use: 28.83, mount: '/mnt/host/mnt/pool_0/media', rw: false },
  { fs: 'pool_0/cloud', type: 'zfs', size: 1415311654912, used: 142173929472, available: 1273137725440, use: 10.05, mount: '/mnt/host/mnt/pool_0/cloud', rw: false }
]

const blocks = [
  { name: 'sdb', type: 'disk', fsType: '', mount: '', size: 2000398934016, physical: 'HDD', uuid: '', label: '', model: 'WDC WD20EARS-00M', serial: '', removable: false, protocol: 'sata', group: '', device: '/dev/sdb' },
  { name: 'sdc', type: 'disk', fsType: '', mount: '', size: 2000398934016, physical: 'HDD', uuid: '', label: '', model: 'WDC WD20EZRX-00D', serial: '', removable: false, protocol: 'sata', group: '', device: '/dev/sdc' },
  { name: 'sdb1', type: 'part', fsType: 'zfs_member', mount: '', size: 2000389406720, physical: '', uuid: '10083813909764366566', label: 'pool_0', model: '', serial: '', removable: false, protocol: '', group: '', device: '/dev/sdb' },
  { name: 'sdb9', type: 'part', fsType: '', mount: '', size: 8388608, physical: '', uuid: '', label: '', model: '', serial: '', removable: false, protocol: '', group: '', device: '/dev/sdb' },
  { name: 'sdc1', type: 'part', fsType: 'zfs_member', mount: '', size: 2000389406720, physical: '', uuid: '10083813909764366566', label: 'pool_0', model: '', serial: '', removable: false, protocol: '', group: '', device: '/dev/sdc' },
  { name: 'sdc9', type: 'part', fsType: '', mount: '', size: 8388608, physical: '', uuid: '', label: '', model: '', serial: '', removable: false, protocol: '', group: '', device: '/dev/sdc' }
]
```

`zpool list`

```
NAME     SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
pool_0  1.81T   726G  1.10T        -         -     0%    39%  1.00x    ONLINE  -
```

`zfs list`

```
NAME           USED  AVAIL  REFER  MOUNTPOINT
pool_0         726G  1.05T    96K  /mnt/pool_0
pool_0/cloud   133G  1.05T   133G  /mnt/pool_0/cloud
pool_0/media   593G  1.05T   593G  /mnt/pool_0/media
```

`df`

```
Filesystem    Type  1024-blocks       Used  Available  Capacity  Mounted on
pool_0        zfs    1124437248        128 1124437120        1%  /mnt/pool_0
pool_0/media  zfs    1746713472  622276352 1124437120       36%  /mnt/pool_0/media
pool_0/cloud  zfs    1263583360  139146240 1124437120       12%  /mnt/pool_0/cloud
```