Closed EliasLucky closed 1 year ago
To be honest, even if this problem is on my side, it's not a big deal for me; I can still write my own shell script that captures fastfetch's output and appends disk usage information to it.
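A wrapper like the one described could be sketched roughly as below (a hypothetical illustration; the `df`-based line format is an assumption, not fastfetch's own output):

```shell
#!/bin/sh
# Hypothetical wrapper: print fastfetch's normal output, then append a
# disk-usage line built from `df`. The line format is illustrative only.
command -v fastfetch >/dev/null 2>&1 && fastfetch
# NR==2 skips df's header row; $3 = used, $2 = size, $5 = use%
df -h / | awk 'NR==2 { printf "Disk (/): %s / %s (%s)\n", $3, $2, $5 }'
```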
Thanks for reporting this; without the report we wouldn't know about this bug and it would never get fixed.
Some information I need:
cat /proc/mounts
fastfetch -l none -s disk --disk-show-hidden --disk-show-subvolumes --disk-show-unknown
fastfetch -l none -s disk --disk-folders /
You're welcome; it's nice to hear that I'm somehow helping this project grow. Here is the information you needed. Ask me for whatever you want; if you like, I can post the strace output of running "fastfetch -l none -s disk --disk-folders /".
Output of cat /proc/mounts:
proc /proc proc rw,nosuid,nodev,noexec,relatime 0 0
sys /sys sysfs rw,nosuid,nodev,noexec,relatime 0 0
dev /dev devtmpfs rw,nosuid,relatime,size=3984332k,nr_inodes=996083,mode=755,inode64 0 0
run /run tmpfs rw,nosuid,nodev,relatime,mode=755,inode64 0 0
efivarfs /sys/firmware/efi/efivars efivarfs rw,nosuid,nodev,noexec,relatime 0 0
zroot/ROOT/default / zfs rw,nodev,relatime,xattr,posixacl 0 0
securityfs /sys/kernel/security securityfs rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /dev/shm tmpfs rw,nosuid,nodev,inode64 0 0
devpts /dev/pts devpts rw,nosuid,noexec,relatime,gid=5,mode=620,ptmxmode=000 0 0
cgroup2 /sys/fs/cgroup cgroup2 rw,nosuid,nodev,noexec,relatime,nsdelegate,memory_recursiveprot 0 0
pstore /sys/fs/pstore pstore rw,nosuid,nodev,noexec,relatime 0 0
bpf /sys/fs/bpf bpf rw,nosuid,nodev,noexec,relatime,mode=700 0 0
systemd-1 /proc/sys/fs/binfmt_misc autofs rw,relatime,fd=34,pgrp=1,timeout=0,minproto=5,maxproto=5,direct,pipe_ino=12484 0 0
hugetlbfs /dev/hugepages hugetlbfs rw,relatime,pagesize=2M 0 0
debugfs /sys/kernel/debug debugfs rw,nosuid,nodev,noexec,relatime 0 0
mqueue /dev/mqueue mqueue rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /tmp tmpfs rw,nosuid,nodev,size=4000572k,nr_inodes=1048576,inode64 0 0
tracefs /sys/kernel/tracing tracefs rw,nosuid,nodev,noexec,relatime 0 0
configfs /sys/kernel/config configfs rw,nosuid,nodev,noexec,relatime 0 0
ramfs /run/credentials/systemd-tmpfiles-setup-dev.service ramfs ro,nosuid,nodev,noexec,relatime,mode=700 0 0
fusectl /sys/fs/fuse/connections fusectl rw,nosuid,nodev,noexec,relatime 0 0
ramfs /run/credentials/systemd-sysctl.service ramfs ro,nosuid,nodev,noexec,relatime,mode=700 0 0
zroot/data/home /home zfs rw,nodev,relatime,xattr,posixacl 0 0
zroot/var/log /var/log zfs rw,nodev,relatime,xattr,posixacl 0 0
zroot/var/lib/docker /var/lib/docker zfs rw,nodev,relatime,xattr,posixacl 0 0
/dev/sda1 /boot/efi vfat rw,relatime,fmask=0022,dmask=0022,codepage=437,iocharset=ascii,shortname=mixed,utf8,errors=remount-ro 0 0
ramfs /run/credentials/systemd-tmpfiles-setup.service ramfs ro,nosuid,nodev,noexec,relatime,mode=700 0 0
binfmt_misc /proc/sys/fs/binfmt_misc binfmt_misc rw,nosuid,nodev,noexec,relatime 0 0
tmpfs /run/user/1000 tmpfs rw,nosuid,nodev,relatime,size=800112k,nr_inodes=200028,mode=700,uid=1000,gid=1000,inode64 0 0
zroot/var/lib/libvirt /var/lib/libvirt zfs rw,nodev,relatime,xattr,posixacl 0 0
Output of fastfetch -l none -s disk --disk-show-hidden --disk-show-subvolumes --disk-show-unknown
It just shows the boot partition:
Disk (/boot/efi): 224.00 KiB / 598.79 MiB (0%) - vfat [Hidden]
Output of fastfetch -l none -s disk --disk-folders /
Absolutely nothing, just an empty line.
Thanks, that was helpful.
Please try the latest build. Please also paste the output of fastfetch -s disk --format json here.
I've built fastfetch from the latest dev branch and tried env NO_CONFIG=1 ./fastfetch --structure Disk, and everything works fine; it now shows the size of my mounted zfs datasets.
Disk (/): 38.33 GiB / 327.56 GiB (12%) - zfs [External]
Disk (/home): 122.03 GiB / 411.26 GiB (30%) - zfs [External]
Disk (/var/lib/docker): 128.00 KiB / 289.23 GiB (0%) - zfs [External]
Disk (/var/lib/libvirt): 256.00 KiB / 289.23 GiB (0%) - zfs [External]
Disk (/var/log): 1.25 MiB / 289.23 GiB (0%) - zfs [External]
Here is the output from ./fastfetch -s disk --format json:
[
{
"type": "Disk",
"result": [
{
"bytes": {
"available": 310554001408,
"free": 310554001408,
"total": 351715590144,
"used": 41161588736
},
"files": {
"total": 606748821,
"used": 197965,
"filesystem": "zfs"
},
"mountpoint": "/",
"mountFrom": "zroot/ROOT/default",
"name": "",
"type": [
"External"
]
},
{
"bytes": {
"available": 627646464,
"free": 627646464,
"total": 627875840,
"used": 229376
},
"files": {
"total": 0,
"used": 0,
"filesystem": "vfat"
},
"mountpoint": "/boot/efi",
"mountFrom": "/dev/sda1",
"name": "EFI system partition",
"type": [
"Hidden"
]
},
{
"bytes": {
"available": 310554001408,
"free": 310554001408,
"total": 441584844800,
"used": 131030843392
},
"files": {
"total": 606852964,
"used": 302108,
"filesystem": "zfs"
},
"mountpoint": "/home",
"mountFrom": "zroot/data/home",
"name": "",
"type": [
"External"
]
},
{
"bytes": {
"available": 310554001408,
"free": 310554001408,
"total": 310554132480,
"used": 131072
},
"files": {
"total": 606550862,
"used": 6,
"filesystem": "zfs"
},
"mountpoint": "/var/lib/docker",
"mountFrom": "zroot/var/lib/docker",
"name": "",
"type": [
"External"
]
},
{
"bytes": {
"available": 310554001408,
"free": 310554001408,
"total": 310554263552,
"used": 262144
},
"files": {
"total": 606550890,
"used": 34,
"filesystem": "zfs"
},
"mountpoint": "/var/lib/libvirt",
"mountFrom": "zroot/var/lib/libvirt",
"name": "",
"type": [
"External"
]
},
{
"bytes": {
"available": 310554001408,
"free": 310554001408,
"total": 310555312128,
"used": 1310720
},
"files": {
"total": 606550879,
"used": 23,
"filesystem": "zfs"
},
"mountpoint": "/var/log",
"mountFrom": "zroot/var/log",
"name": "",
"type": [
"External"
]
}
]
}
]
Well, datasets other than zroot/ROOT/default should be detected as subvolumes, and they should not be detected as external. The problem is that I can't debug this myself...
Please try it again. Note that subvolumes are hidden by default; you may show them with --disk-show-subvolumes.
As I understand it, it should then show only the / volume, a.k.a. zroot/ROOT/default, without showing any other datasets (subvolumes).
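The behavior being discussed can be illustrated with a toy heuristic over /proc/mounts (an assumption for illustration only, not fastfetch's actual implementation): ZFS mount sources are named pool/dataset, so any further mount whose source starts with an already-seen pool name could be flagged as a subvolume.

```shell
#!/bin/sh
# Toy ZFS subvolume detection over a mounts file (illustrative only,
# not fastfetch's real logic). For zfs entries, the pool name is the
# part of the source before the first '/'; after the first mount seen
# for a pool, further mounts from the same pool are flagged subvolumes.
detect_zfs_subvolumes() {
    awk '$3 == "zfs" {
        split($1, parts, "/"); pool = parts[1]
        if (pool in seen)
            print $2 " -> subvolume of pool " pool
        else {
            seen[pool] = 1
            print $2 " -> first mount of pool " pool
        }
    }' "$1"
}
detect_zfs_subvolumes /proc/mounts
```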
Well, I've cloned and built the latest GitHub dev branch again, and after running env NO_CONFIG=1 ./fastfetch --structure Disk it still shows the subvolumes, and still shows them as external:
Disk (/): 38.33 GiB / 327.52 GiB (12%) - zfs [External]
Disk (/home): 122.07 GiB / 411.26 GiB (30%) - zfs [External]
Disk (/var/lib/docker): 128.00 KiB / 289.19 GiB (0%) - zfs [External]
Disk (/var/lib/libvirt): 256.00 KiB / 289.19 GiB (0%) - zfs [External]
Disk (/var/log): 1.25 MiB / 289.19 GiB (0%) - zfs [External]
Adding --disk-show-subvolumes, a.k.a. running env NO_CONFIG=1 ./fastfetch --structure Disk --disk-show-subvolumes, gives the same result:
Disk (/): 38.33 GiB / 327.52 GiB (12%) - zfs [External]
Disk (/home): 122.07 GiB / 411.26 GiB (30%) - zfs [External]
Disk (/var/lib/docker): 128.00 KiB / 289.19 GiB (0%) - zfs [External]
Disk (/var/lib/libvirt): 256.00 KiB / 289.19 GiB (0%) - zfs [External]
Disk (/var/log): 1.25 MiB / 289.19 GiB (0%) - zfs [External]
Output from ./fastfetch -s disk --format json:
[
{
"type": "Disk",
"result": [
{
"bytes": {
"available": 310513631232,
"free": 310513631232,
"total": 351675219968,
"used": 41161588736
},
"files": {
"total": 606669909,
"used": 197965,
"filesystem": "zfs"
},
"mountpoint": "/",
"mountFrom": "zroot/ROOT/default",
"name": "",
"type": [
"External"
]
},
{
"bytes": {
"available": 627646464,
"free": 627646464,
"total": 627875840,
"used": 229376
},
"files": {
"total": 0,
"used": 0,
"filesystem": "vfat"
},
"mountpoint": "/boot/efi",
"mountFrom": "/dev/sda1",
"name": "EFI system partition",
"type": [
"Hidden"
]
},
{
"bytes": {
"available": 310513631232,
"free": 310513631232,
"total": 441584713728,
"used": 131071082496
},
"files": {
"total": 606775653,
"used": 303709,
"filesystem": "zfs"
},
"mountpoint": "/home",
"mountFrom": "zroot/data/home",
"name": "",
"type": [
"External"
]
},
{
"bytes": {
"available": 310513631232,
"free": 310513631232,
"total": 310513762304,
"used": 131072
},
"files": {
"total": 606471950,
"used": 6,
"filesystem": "zfs"
},
"mountpoint": "/var/lib/docker",
"mountFrom": "zroot/var/lib/docker",
"name": "",
"type": [
"External"
]
},
{
"bytes": {
"available": 310513631232,
"free": 310513631232,
"total": 310513893376,
"used": 262144
},
"files": {
"total": 606471978,
"used": 34,
"filesystem": "zfs"
},
"mountpoint": "/var/lib/libvirt",
"mountFrom": "zroot/var/lib/libvirt",
"name": "",
"type": [
"External"
]
},
{
"bytes": {
"available": 310513631232,
"free": 310513631232,
"total": 310514941952,
"used": 1310720
},
"files": {
"total": 606471967,
"used": 23,
"filesystem": "zfs"
},
"mountpoint": "/var/log",
"mountFrom": "zroot/var/log",
"name": "",
"type": [
"External"
]
}
]
}
]
Sorry, I pushed the commit to my fork repo by mistake. Please try it again.
Oh okay lol, no problem, gimme a sec.
Now it works fine.
With env NO_CONFIG=1 ./fastfetch --structure Disk it shows only information about the zroot/ROOT/default dataset, a.k.a. the / volume, and recognizes this volume as external:
Disk (/): 38.33 GiB / 327.52 GiB (12%) - zfs [External]
With env NO_CONFIG=1 ./fastfetch --structure Disk --disk-show-subvolumes it shows information about the zroot/ROOT/default dataset, a.k.a. the / volume, and also about the other subvolumes:
Disk (/): 38.33 GiB / 327.52 GiB (12%) - zfs [External]
Disk (/home): 122.07 GiB / 411.26 GiB (30%) - zfs [Subvolume]
Disk (/var/lib/docker): 128.00 KiB / 289.19 GiB (0%) - zfs [Subvolume]
Disk (/var/lib/libvirt): 256.00 KiB / 289.19 GiB (0%) - zfs [Subvolume]
Disk (/var/log): 1.25 MiB / 289.19 GiB (0%) - zfs [Subvolume]
Here is also the output of ./fastfetch -s disk --format json, if needed:
[
{
"type": "Disk",
"result": [
{
"bytes": {
"available": 310511534080,
"free": 310511534080,
"total": 351673122816,
"used": 41161588736
},
"files": {
"total": 606665917,
"used": 197965,
"filesystem": "zfs"
},
"mountpoint": "/",
"mountFrom": "zroot/ROOT/default",
"name": "",
"type": [
"External"
]
},
{
"bytes": {
"available": 627646464,
"free": 627646464,
"total": 627875840,
"used": 229376
},
"files": {
"total": 0,
"used": 0,
"filesystem": "vfat"
},
"mountpoint": "/boot/efi",
"mountFrom": "/dev/sda1",
"name": "EFI system partition",
"type": [
"Hidden"
]
},
{
"bytes": {
"available": 310511534080,
"free": 310511534080,
"total": 441584582656,
"used": 131073048576
},
"files": {
"total": 606771667,
"used": 303715,
"filesystem": "zfs"
},
"mountpoint": "/home",
"mountFrom": "zroot/data/home",
"name": "",
"type": [
"Subvolume"
]
},
{
"bytes": {
"available": 310511534080,
"free": 310511534080,
"total": 310511665152,
"used": 131072
},
"files": {
"total": 606467958,
"used": 6,
"filesystem": "zfs"
},
"mountpoint": "/var/lib/docker",
"mountFrom": "zroot/var/lib/docker",
"name": "",
"type": [
"Subvolume"
]
},
{
"bytes": {
"available": 310511534080,
"free": 310511534080,
"total": 310511796224,
"used": 262144
},
"files": {
"total": 606467986,
"used": 34,
"filesystem": "zfs"
},
"mountpoint": "/var/lib/libvirt",
"mountFrom": "zroot/var/lib/libvirt",
"name": "",
"type": [
"Subvolume"
]
},
{
"bytes": {
"available": 310511534080,
"free": 310511534080,
"total": 310512844800,
"used": 1310720
},
"files": {
"total": 606467975,
"used": 23,
"filesystem": "zfs"
},
"mountpoint": "/var/log",
"mountFrom": "zroot/var/log",
"name": "",
"type": [
"Subvolume"
]
}
]
}
]
Now zpool volumes should not be detected as external disks.
zpool volumes are no longer detected as external disks, except for the main root ("/"). Everything works fine now; thank you for answering and fixing the issue!
What's the output for the main root? Do you have your system installed on a USB flash drive? If not, the root path should not be detected as external.
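For context, one common way a tool can make that call on Linux (a sketch of the general sysfs mechanism, not necessarily fastfetch's exact code) is to check the kernel's per-device removable flag:

```shell
#!/bin/sh
# Print the kernel's 'removable' flag for each block device.
# 1 = removable (e.g. a USB stick), 0 = fixed disk. This is the generic
# sysfs mechanism, given as an illustration of how "external" can be
# decided; fastfetch's actual heuristics may differ.
for flag in /sys/block/*/removable; do
    [ -r "$flag" ] || continue
    dev=${flag#/sys/block/}; dev=${dev%/removable}
    printf '%s: removable=%s\n' "$dev" "$(cat "$flag")"
done
```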
Hello, it's nice to see your project growing, and I wish you more success in the future. Anyway, I've found a little bug, and to be honest I'm not even sure it's related to your project. Maybe it's something on my side; I've seen screenshots in which fastfetch displays disk usage information, so maybe I misconfigured something.
General description of bug:
I have Arch Linux system installed on root ZFS filesystem pool.
The main thing is: fastfetch just doesn't display any disk usage information. Absolutely nothing; there isn't even a "Disk" label in the output.
I've tried to use
env NO_CONFIG=1 fastfetch --structure Disk
, but still nothing; only the logo is displayed, and that's kinda weird.
Here is some information about my filesystem configuration that I think will be helpful.
zfs list
output:
zpool list
output: (there is only one zfs pool, and it's called zroot)
ls /dev/disk/by-label/
output:
Often helpful information:
The content of the configuration file you use (if any)
Even with
env NO_CONFIG=1 fastfetch --structure Disk --show-errors --stat --multithreading false --hide-cursor false
it displays nothing: no errors, only the Arch logo.
Output of fastfetch --list-features:
Here is the command I tried, with its output below, to gather more details about this situation:
Output:
Does running sleep 1 before fastfetch work? No.