Canop / dysk

A linux utility to get information on filesystems, like df but better
https://dystroy.org/dysk
MIT License
890 stars 25 forks

ZFS mount points not shown by default #32

Closed lordrasmus closed 2 years ago

lordrasmus commented 2 years ago

ZFS mounts are not shown by default on Linux,

but when running with -a the ZFS mount points are shown,

and the disk column is empty on ZFS filesystems.

Canop commented 2 years ago

Can you please give me the output of `lfs`, `lfs -a`, and `lfs -a -j`?

lordrasmus commented 2 years ago

lfs1 lfs_2 lfs.json.gz

Yes, here is the output.

Canop commented 2 years ago

It looks like I have some investigation to do to support ZFS and understand how it describes its block devices, which means installing ZFS on one of my systems :\

Canop commented 2 years ago

Just in case, could you output `tree /sys/block` and `cat /proc/self/mountinfo`?

lordrasmus commented 2 years ago

Here is the output:

```
/sys/block
├── loop0 -> ../devices/virtual/block/loop0
├── loop1 -> ../devices/virtual/block/loop1
├── loop10 -> ../devices/virtual/block/loop10
├── loop11 -> ../devices/virtual/block/loop11
├── loop12 -> ../devices/virtual/block/loop12
├── loop13 -> ../devices/virtual/block/loop13
├── loop14 -> ../devices/virtual/block/loop14
├── loop15 -> ../devices/virtual/block/loop15
├── loop16 -> ../devices/virtual/block/loop16
├── loop17 -> ../devices/virtual/block/loop17
├── loop18 -> ../devices/virtual/block/loop18
├── loop19 -> ../devices/virtual/block/loop19
├── loop2 -> ../devices/virtual/block/loop2
├── loop20 -> ../devices/virtual/block/loop20
├── loop21 -> ../devices/virtual/block/loop21
├── loop22 -> ../devices/virtual/block/loop22
├── loop23 -> ../devices/virtual/block/loop23
├── loop24 -> ../devices/virtual/block/loop24
├── loop3 -> ../devices/virtual/block/loop3
├── loop4 -> ../devices/virtual/block/loop4
├── loop5 -> ../devices/virtual/block/loop5
├── loop6 -> ../devices/virtual/block/loop6
├── loop7 -> ../devices/virtual/block/loop7
├── loop8 -> ../devices/virtual/block/loop8
├── loop9 -> ../devices/virtual/block/loop9
├── nvme0n1 -> ../devices/pci0000:00/0000:00:01.1/0000:01:00.0/nvme/nvme0/nvme0n1
├── nvme1n1 -> ../devices/pci0000:00/0000:00:01.3/0000:02:00.2/0000:03:04.0/0000:08:00.0/nvme/nvme1/nvme1n1
├── sda -> ../devices/pci0000:00/0000:00:01.3/0000:02:00.1/ata1/host0/target0:0:0/0:0:0:0/block/sda
├── sdb -> ../devices/pci0000:00/0000:00:01.3/0000:02:00.1/ata3/host2/target2:0:0/2:0:0:0/block/sdb
├── sdc -> ../devices/pci0000:00/0000:00:01.3/0000:02:00.1/ata5/host4/target4:0:0/4:0:0:0/block/sdc
└── zram0 -> ../devices/virtual/block/zram0
```

```
cat /proc/self/mountinfo
24 31 0:22 / /sys rw,nosuid,nodev,noexec,relatime shared:7 - sysfs sysfs rw
25 31 0:23 / /proc rw,nosuid,nodev,noexec,relatime shared:13 - proc proc rw
26 31 0:5 / /dev rw,nosuid,relatime shared:2 - devtmpfs udev rw,size=16363092k,nr_inodes=4090773,mode=755,inode64
27 26 0:24 / /dev/pts rw,nosuid,noexec,relatime shared:3 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
28 31 0:25 / /run rw,nosuid,nodev,noexec,relatime shared:5 - tmpfs tmpfs rw,size=3281160k,mode=755,inode64
31 1 0:27 /Ubuntu21.10 / rw,relatime shared:1 - btrfs /dev/nvme0n1p6 rw,ssd,space_cache,subvolid=2027,subvol=/Ubuntu21.10
30 24 0:6 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:8 - securityfs securityfs rw
32 26 0:30 / /dev/shm rw,nosuid,nodev shared:4 - tmpfs tmpfs rw,inode64
33 28 0:31 / /run/lock rw,nosuid,nodev,noexec,relatime shared:6 - tmpfs tmpfs rw,size=5120k,inode64
34 24 0:32 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime shared:9 - cgroup2 cgroup2 rw,nsdelegate,memory_recursiveprot
35 24 0:33 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:10 - pstore pstore rw
36 24 0:34 / /sys/firmware/efi/efivars rw,nosuid,nodev,noexec,relatime shared:11 - efivarfs efivarfs rw
37 24 0:35 / /sys/fs/bpf rw,nosuid,nodev,noexec,relatime shared:12 - bpf none rw,mode=700
38 25 0:36 / /proc/sys/fs/binfmt_misc rw,relatime shared:14 - autofs systemd-1 rw,fd=29,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=36728
39 26 0:37 / /dev/hugepages rw,relatime shared:15 - hugetlbfs hugetlbfs rw,pagesize=2M
40 26 0:20 / /dev/mqueue rw,nosuid,nodev,noexec,relatime shared:16 - mqueue mqueue rw
41 24 0:7 / /sys/kernel/debug rw,nosuid,nodev,noexec,relatime shared:17 - debugfs debugfs rw
42 24 0:12 / /sys/kernel/tracing rw,nosuid,nodev,noexec,relatime shared:18 - tracefs tracefs rw
43 28 0:38 / /run/rpc_pipefs rw,relatime shared:19 - rpc_pipefs sunrpc rw
44 24 0:39 / /sys/fs/fuse/connections rw,nosuid,nodev,noexec,relatime shared:20 - fusectl fusectl rw
45 24 0:21 / /sys/kernel/config rw,nosuid,nodev,noexec,relatime shared:21 - configfs configfs rw
46 25 0:40 / /proc/fs/nfsd rw,relatime shared:22 - nfsd nfsd rw
196 31 7:0 / /snap/bare/5 ro,nodev,relatime shared:59 - squashfs /dev/loop0 ro
153 31 7:2 / /snap/core20/1328 ro,nodev,relatime shared:81 - squashfs /dev/loop2 ro
264 31 0:27 / /mnt/m2 rw,relatime shared:84 - btrfs /dev/nvme0n1p6 rw,ssd,space_cache,subvolid=5,subvol=/
265 31 0:27 /config_files/network /etc/systemd/network rw,relatime shared:87 - btrfs /dev/nvme0n1p6 rw,ssd,space_cache,subvolid=2028,subvol=/config_files/network
263 31 7:3 / /snap/codechecker/6 ro,nodev,relatime shared:90 - squashfs /dev/loop3 ro
151 31 7:6 / /snap/gtk-common-themes/1519 ro,nodev,relatime shared:93 - squashfs /dev/loop6 ro
152 31 7:5 / /snap/firefox/973 ro,nodev,relatime shared:99 - squashfs /dev/loop5 ro
158 31 7:7 / /snap/snap-store/547 ro,nodev,relatime shared:102 - squashfs /dev/loop7 ro
251 31 259:3 / /boot/efi rw,relatime shared:105 - vfat /dev/nvme0n1p2 rw,fmask=0077,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,errors=remount-ro
178 28 0:45 / /run/qemu rw,nosuid,nodev,relatime shared:111 - tmpfs tmpfs rw,mode=755,inode64
438 31 0:44 / /mnt/ssd rw,relatime shared:114 - btrfs /dev/sda1 rw,ssd,space_cache,subvolid=5,subvol=/
446 31 0:43 / /mnt/m2_2 rw,noatime shared:117 - btrfs /dev/nvme1n1p3 rw,ssd,space_cache,subvolid=5,subvol=/
437 31 7:9 / /snap/core18/2253 ro,nodev,relatime shared:120 - squashfs /dev/loop9 ro
461 31 7:10 / /snap/snap-store/558 ro,nodev,relatime shared:123 - squashfs /dev/loop10 ro
552 31 7:11 / /snap/gnome-3-38-2004/87 ro,nodev,relatime shared:126 - squashfs /dev/loop11 ro
646 31 7:12 / /snap/gnome-3-34-1804/77 ro,nodev,relatime shared:129 - squashfs /dev/loop12 ro
656 31 7:13 / /snap/youtube-dl/4630 ro,nodev,relatime shared:132 - squashfs /dev/loop13 ro
666 31 7:14 / /snap/gnome-3-38-2004/99 ro,nodev,relatime shared:135 - squashfs /dev/loop14 ro
676 31 7:15 / /snap/codechecker/7 ro,nodev,relatime shared:138 - squashfs /dev/loop15 ro
686 31 7:16 / /snap/gnome-3-34-1804/72 ro,nodev,relatime shared:141 - squashfs /dev/loop16 ro
696 31 7:17 / /snap/core18/2284 ro,nodev,relatime shared:144 - squashfs /dev/loop17 ro
706 31 7:18 / /snap/core20/1270 ro,nodev,relatime shared:147 - squashfs /dev/loop18 ro
726 31 7:20 / /snap/youtube-dl/4572 ro,nodev,relatime shared:153 - squashfs /dev/loop20 ro
901 38 0:50 / /proc/sys/fs/binfmt_misc rw,nosuid,nodev,noexec,relatime shared:159 - binfmt_misc binfmt_misc rw
322 31 0:27 /Ubuntu21.10/var/lib/docker/btrfs /var/lib/docker/btrfs rw,relatime shared:1 - btrfs /dev/nvme0n1p6 rw,ssd,space_cache,subvolid=2027,subvol=/Ubuntu21.10
347 31 0:58 / /mnt/zfs rw shared:763 - zfs data_hd rw,xattr,noacl
372 347 0:61 / /mnt/zfs/entwicklung rw shared:776 - zfs data_hd/entwicklung rw,xattr,noacl
397 347 0:59 / /mnt/zfs/VMS rw shared:789 - zfs data_hd/VMS rw,xattr,noacl
422 347 0:60 / /mnt/zfs/Daten rw shared:802 - zfs data_hd/Daten rw,xattr,noacl
466 347 0:62 / /mnt/zfs/build_tmp rw,noatime shared:815 - zfs data_hd/build_tmp rw,xattr,noacl
495 31 0:65 / /mnt/rasiserver rw,noatime shared:828 - nfs4 10.100.102.13:/mnt/btrfs rw,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.100.102.45,local_lock=none,addr=10.100.102.13
2016 28 0:67 / /run/user/1000 rw,nosuid,nodev,relatime shared:1004 - tmpfs tmpfs rw,size=3281160k,nr_inodes=820290,mode=700,uid=1000,gid=1000,inode64
2017 31 7:22 / /snap/rustup/1027 ro,nodev,relatime shared:1061 - squashfs /dev/loop22 ro
2109 31 7:23 / /snap/snapd/14978 ro,nodev,relatime shared:1108 - squashfs /dev/loop23 ro
241 31 7:19 / /snap/core/12725 ro,nodev,relatime shared:150 - squashfs /dev/loop19 ro
2171 2016 0:69 / /run/user/1000/gvfs rw,nosuid,nodev,relatime shared:915 - fuse.gvfsd-fuse gvfsd-fuse rw,user_id=1000,group_id=1000
147 2016 0:63 / /run/user/1000/doc rw,nosuid,nodev,relatime shared:78 - fuse.portal portal rw,user_id=1000,group_id=1000
148 495 0:64 / /mnt/rasiserver/data rw,noatime shared:498 - nfs4 10.100.102.13:/mnt/btrfs/data rw,vers=4.2,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.100.102.45,local_lock=none,addr=10.100.102.13
2243 31 7:1 / /snap/firefox/996 ro,nodev,relatime shared:908 - squashfs /dev/loop1 ro
149 28 0:25 /snapd/ns /run/snapd/ns rw,nosuid,nodev,noexec,relatime - tmpfs tmpfs rw,size=3281160k,mode=755,inode64
2551 149 0:4 mnt:[4026533985] /run/snapd/ns/firefox.mnt rw - nsfs nsfs rw
```

And the ZFS pool looks like this:

```
zpool list -v
NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data_hd      7.25T  1.36T  5.89T        -         -     3%    18%  1.69x  ONLINE  -
  sdb        3.62T   704G  2.94T        -         -     4%  19.0%      -  ONLINE
  sdc        3.62T   692G  2.95T        -         -     3%  18.6%      -  ONLINE
logs             -      -      -        -         -      -      -      -       -
  nvme1n1p1  9.50G    40K  9.50G        -         -     0%  0.00%      -  ONLINE
cache            -      -      -        -         -      -      -      -       -
  sda4       33.2G  27.3G  5.94G        -         -     0%  82.1%      -  ONLINE
  nvme1n1p2   200G   188G  11.6G        -         -     0%  94.2%      -  ONLINE
```

data_hd is a combination of multiple devices, so I guess displaying the disk type will not be so easy.

Canop commented 2 years ago

> data_hd is a combination of multiple devices, so I guess displaying the disk type will not be so easy.

That might be the crux of it.

I don't know ZFS well enough, but if the underlying stats are, as I imagine, not redundant with other ones, I might just set the disk type to "ZFS" when no exclusive block device is found and the fs type is "zfs", and then display those filesystems among the "normal" ones.
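To picture that fallback, here is a minimal sketch, assuming a hypothetical, simplified `Mount` struct; the field names are made up for illustration and are not the real lfs/dysk types:

```rust
/// Hypothetical, simplified view of a mounted filesystem.
struct Mount {
    fs_type: String,      // e.g. "zfs", "btrfs", "squashfs"
    disk: Option<String>, // backing disk kind, e.g. "SSD", "HDD", if one was resolved
}

/// Label ZFS filesystems even when no exclusive block device was found.
fn disk_label(mount: &Mount) -> Option<String> {
    match (&mount.disk, mount.fs_type.as_str()) {
        // A real block device was resolved: keep its label.
        (Some(disk), _) => Some(disk.clone()),
        // No block device, but the fs type is zfs: fall back to a generic
        // "ZFS" label so the volume is listed among the "normal" filesystems.
        (None, "zfs") => Some("ZFS".to_string()),
        // Anything else stays without a disk label (hidden unless -a is used).
        (None, _) => None,
    }
}

fn main() {
    let zfs = Mount { fs_type: "zfs".into(), disk: None };
    assert_eq!(disk_label(&zfs).as_deref(), Some("ZFS"));
}
```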

lordrasmus commented 2 years ago

I think that would be the best for now.

Probably there is a way to read the block devices which are used by a ZFS pool / filesystem, but I think that needs some research on how it can be done.

Canop commented 2 years ago

Would you be able to test a specific binary if I prepare one tonight?

(or if you're a Rust programmer and want to have a look at the inside, I may publish the branch, too)

lordrasmus commented 2 years ago

I'm a programmer, but I haven't used Rust till now.

But checking out a branch and building shouldn't be a problem.

Canop commented 2 years ago

So let's try to do it the programmer way :)

(but you'll have to clean your disks a little if they are the ones I see at the top of this issue: Rust produces big compilation artifacts)

lordrasmus commented 2 years ago

No problem, on the ZFS fs I have TBs of space free :)

Canop commented 2 years ago

Ok, right now I've settled for not filling the "disk" field immediately but just showing ZFS volumes among the normal ones, even when there's no disk. You can try the "show-zfs" branch.

almereyda commented 2 years ago

> Probably there is a way to read the block devices which are used by a ZFS pool / filesystem, but I think that needs some research on how it can be done.

This information is available via zpool status: https://openzfs.github.io/openzfs-docs/man/8/zpool-status.8.html

Block devices are only used by zpools. ZFS datasets can only be created on a pool's first-order dataset (and below it), not on any kind of block device. A special case is when a ZVOL is created as a dataset, which is (1) a block device (2) exposed as a dataset (3) on a pool. But that's something you'd have to specifically want to set up that way.
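As a rough illustration of that route (not something lfs currently does): a tool could shell out to `zpool list -v -H -P`, standard OpenZFS flags for a verbose vdev listing in scripted, tab-separated form with full device paths, and keep the entries that look like block device paths. The parsing below is deliberately simplistic, and the pool name is just taken from the outputs above:

```rust
use std::process::Command;

/// Best-effort listing of the block devices backing a ZFS pool,
/// by parsing the scripted output of `zpool list -v -H -P <pool>`.
fn zpool_devices(pool: &str) -> std::io::Result<Vec<String>> {
    let output = Command::new("zpool")
        .args(["list", "-v", "-H", "-P", pool])
        .output()?;
    let stdout = String::from_utf8_lossy(&output.stdout);
    let devices = stdout
        .lines()
        .skip(1) // the first line is the pool itself
        .filter_map(|line| line.split('\t').next())
        .map(str::trim)
        // keep only entries that look like device paths, skipping
        // grouping rows such as "logs", "cache" or "raidz1-0"
        .filter(|name| name.starts_with("/dev/"))
        .map(|name| name.to_string())
        .collect();
    Ok(devices)
}

fn main() {
    // Pool name taken from the outputs in this issue; adjust for your system.
    match zpool_devices("data_hd") {
        Ok(devs) => println!("{devs:?}"),
        Err(e) => eprintln!("zpool not available: {e}"),
    }
}
```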

fow0ryl commented 2 years ago

Hi, here is some output from my test machine. One SSD as the root system. One raidz1 zpool with 3 physical disks.

```
# zpool list -v
NAME                                    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zPoolW                                 10.9T  2.25T  8.63T        -         -     1%    20%  1.00x  ONLINE  -
  ata-TOSHIBA_MD04ACA400_26L9KGL2FSAA  3.62T   970G  2.68T        -         -     1%  26.1%      -  ONLINE
  ata-TOSHIBA_MD04ACA400_26LIK6C3FSAA  3.62T   576G  3.06T        -         -     1%  15.5%      -  ONLINE
  ata-TOSHIBA_MD04ACA400_26MIK6FDFSAA  3.62T   754G  2.89T        -         -     1%  20.3%      -  ONLINE
```

```
# tree /sys/block/
├── sda -> ../devices/pci0000:00/0000:00:12.0/ata1/host0/target0:0:0/0:0:0:0/block/sda
├── sdb -> ../devices/pci0000:00/0000:00:13.1/0000:02:00.0/ata3/host2/target2:0:0/2:0:0:0/block/sdb
├── sdc -> ../devices/pci0000:00/0000:00:13.1/0000:02:00.0/ata5/host4/target4:0:0/4:0:0:0/block/sdc
└── sdd -> ../devices/pci0000:00/0000:00:13.1/0000:02:00.0/ata6/host5/target5:0:0/5:0:0:0/block/sdd
```

```
# cat /proc/self/mountinfo
22 28 0:21 / /proc rw,nosuid,nodev,noexec,relatime shared:5 - proc proc rw
23 28 0:22 / /sys rw,nosuid,nodev,noexec,relatime shared:6 - sysfs sys rw
24 28 0:5 / /dev rw,nosuid,relatime shared:2 - devtmpfs dev rw,size=8003136k,nr_inodes=2000784,mode=755,inode64
25 28 0:23 / /run rw,nosuid,nodev,relatime shared:12 - tmpfs run rw,mode=755,inode64
26 23 0:24 / /sys/firmware/efi/efivars rw,nosuid,nodev,noexec,relatime shared:7 - efivarfs efivarfs rw
28 1 0:25 / / rw,noatime shared:1 - btrfs /dev/sda2 rw,ssd,space_cache,subvolid=5,subvol=/
27 23 0:6 / /sys/kernel/security rw,nosuid,nodev,noexec,relatime shared:8 - securityfs securityfs rw
29 24 0:27 / /dev/shm rw,nosuid,nodev shared:3 - tmpfs tmpfs rw,inode64
30 24 0:28 / /dev/pts rw,nosuid,noexec,relatime shared:4 - devpts devpts rw,gid=5,mode=620,ptmxmode=000
31 23 0:29 / /sys/fs/cgroup rw,nosuid,nodev,noexec,relatime shared:9 - cgroup2 cgroup2 rw,nsdelegate,memory_recursiveprot
32 23 0:30 / /sys/fs/pstore rw,nosuid,nodev,noexec,relatime shared:10 - pstore pstore rw
33 23 0:31 / /sys/fs/bpf rw,nosuid,nodev,noexec,relatime shared:11 - bpf bpf rw,mode=700
34 22 0:32 / /proc/sys/fs/binfmt_misc rw,relatime shared:13 - autofs systemd-1 rw,fd=30,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=14344
35 24 0:33 / /dev/hugepages rw,relatime shared:14 - hugetlbfs hugetlbfs rw,pagesize=2M
36 24 0:20 / /dev/mqueue rw,nosuid,nodev,noexec,relatime shared:15 - mqueue mqueue rw
37 23 0:7 / /sys/kernel/debug rw,nosuid,nodev,noexec,relatime shared:16 - debugfs debugfs rw
38 23 0:12 / /sys/kernel/tracing rw,nosuid,nodev,noexec,relatime shared:17 - tracefs tracefs rw
39 23 0:34 / /sys/fs/fuse/connections rw,nosuid,nodev,noexec,relatime shared:18 - fusectl fusectl rw
40 23 0:35 / /sys/kernel/config rw,nosuid,nodev,noexec,relatime shared:19 - configfs configfs rw
62 25 0:36 / /run/credentials/systemd-sysusers.service ro,nosuid,nodev,noexec,relatime shared:20 - ramfs none rw,mode=700
41 28 0:37 / /tmp rw,noatime shared:21 - tmpfs tmpfs rw,inode64
91 28 8:1 / /boot/efi rw,relatime shared:44 - vfat /dev/sda1 rw,fmask=0077,dmask=0077,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro
95 28 0:40 / /data/local rw,noatime shared:47 - btrfs /dev/sda3 rw,ssd,space_cache,subvolid=5,subvol=/
94 28 0:47 / /data/postgres rw,noatime shared:49 - zfs zPoolW/local/postgres rw,xattr,posixacl
100 28 0:48 / /data/tvheadend rw,relatime shared:51 - zfs zPoolW/local/tvheadend rw,xattr,posixacl
103 28 0:42 / /data/users/moni rw,relatime shared:53 - zfs zPoolW/local/users/moni rw,xattr,posixacl
106 28 0:49 / /data/remote ro,relatime shared:55 - zfs zPoolW/remote ro,xattr,posixacl
109 28 0:45 / /data/nextcloud rw,relatime shared:57 - zfs zPoolW/local/nextcloud rw,xattr,posixacl
112 28 0:50 / /data/users/henning rw,relatime shared:59 - zfs zPoolW/local/users/henning rw,xattr,posixacl
115 28 0:43 / /data/tmp rw,relatime shared:61 - zfs zPoolW/local/tmp rw,xattr,posixacl
118 28 0:51 / /data/zdata rw,relatime shared:63 - zfs zPoolW/local/data rw,xattr,posixacl
121 28 0:46 / /data/images rw,relatime shared:65 - zfs zPoolW/local/images rw,xattr,posixacl
124 28 0:44 / /data/work rw,relatime shared:67 - zfs zPoolW/local/work rw,xattr,posixacl
127 106 0:56 / /data/remote/Daten-FAM ro,relatime shared:69 - zfs zPoolW/remote/Daten-FAM ro,xattr,posixacl
130 106 0:55 / /data/remote/Tmp ro,relatime shared:71 - zfs zPoolW/remote/Tmp ro,xattr,posixacl
133 106 0:57 / /data/remote/Software ro,relatime shared:73 - zfs zPoolW/remote/Software ro,xattr,posixacl
136 106 0:53 / /data/remote/Backups ro,relatime shared:75 - zfs zPoolW/remote/Backups ro,xattr,posixacl
139 106 0:58 / /data/remote/Daten-SEC ro,relatime shared:77 - zfs zPoolW/remote/Daten-SEC ro,xattr,posixacl
142 106 0:54 / /data/remote/Projekte ro,relatime shared:79 - zfs zPoolW/remote/Projekte ro,xattr,posixacl
145 106 0:52 / /data/remote/Medien ro,relatime shared:81 - zfs zPoolW/remote/Medien ro,xattr,posixacl
797 25 0:65 / /run/user/1000 rw,nosuid,nodev,relatime shared:452 - tmpfs tmpfs rw,size=1602708k,nr_inodes=400677,mode=700,uid=1000,gid=1000,inode64
822 797 0:67 / /run/user/1000/gvfs rw,nosuid,nodev,relatime shared:515 - fuse.gvfsd-fuse gvfsd-fuse rw,user_id=1000,group_id=1000
1235 25 0:77 / /run/user/0 rw,nosuid,nodev,relatime shared:387 - tmpfs tmpfs rw,size=1602708k,nr_inodes=400677,mode=700,inode64
1283 34 0:78 / /proc/sys/fs/binfmt_misc rw,nosuid,nodev,noexec,relatime shared:708 - binfmt_misc binfmt_misc rw
```

Maybe it's an idea to show the name of the zpool as "disk", since it is possible to use many (and different types of) disks in a zpool (vdevs). And you are able to run multiple zpools on one physical device too.
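A tiny sketch of that idea, assuming the ZFS mount source string from mountinfo (e.g. `data_hd/VMS` or `zPoolW/local/postgres` in the outputs above) is at hand: the pool name is simply everything before the first `/`. This is only an illustration, not how lfs resolves the disk column:

```rust
/// For a zfs mount, the mountinfo "source" is `<pool>[/<dataset>...]`,
/// so the pool name is everything before the first '/'.
fn zpool_name(fs_source: &str) -> &str {
    fs_source.split('/').next().unwrap_or(fs_source)
}

fn main() {
    assert_eq!(zpool_name("data_hd/VMS"), "data_hd");
    assert_eq!(zpool_name("zPoolW/local/postgres"), "zPoolW");
    assert_eq!(zpool_name("data_hd"), "data_hd");
}
```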

Showing detailed info may not work if used as root:

```
zpool status -c media
Can't run -c with root privileges unless ZPOOL_SCRIPTS_AS_ROOT is set.
```

But it is fine as a user ...

```
zpool status -c media
  pool: zPoolW
 state: ONLINE
status: Some supported and requested features are not enabled on the pool.
        The pool can still be used, but some features are unavailable.
action: Enable all features using 'zpool upgrade'. Once this is done,
        the pool may no longer be accessible by software that does not
        support the features. See zpool-features(7) for details.
  scan: scrub in progress since Wed Feb 23 01:08:10 2022
        1.53T scanned at 819M/s, 745G issued at 391M/s, 2.25T total
        0B repaired, 32.38% done, 01:07:59 to go
config:

    NAME                                   STATE     READ WRITE CKSUM  media
    zPoolW                                 ONLINE       0     0     0
      ata-TOSHIBA_MD04ACA400_26L9KGL2FSAA  ONLINE       0     0     0    hdd
      ata-TOSHIBA_MD04ACA400_26LIK6C3FSAA  ONLINE       0     0     0    hdd
      ata-TOSHIBA_MD04ACA400_26MIK6FDFSAA  ONLINE       0     0     0    hdd
```

Please provide a binary for amd64 of your zfs-test-branch

Henning

lordrasmus commented 2 years ago

OK, zfs is shown now,

but the sizes are different

zfs

Canop commented 2 years ago

lfs, like most modern tools, uses SI units by default.

If you want df to output with SI units, use `df -H`.

If you want lfs to output in the old binary units, use `lfs --units binary`.
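To put a number on the gap (my arithmetic, using a size from the outputs above): a vdev reported as 3.62T in binary units holds 3.62 × 2⁴⁰ ≈ 3.98 × 10¹² bytes, so the same space shows up as roughly 3.98T in SI units, about a 10% difference at that scale.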

lordrasmus commented 2 years ago

OK, then it looks more like df.

The values of the zfs tool are a bit different, but I think that's not so important.

zfs2