I have set up an LXC container on Proxmox, and the ZFS tools got installed in it accidentally as a package dependency inside the container.
I then accidentally ran zfs commands there (wrong console), and the behaviour is a little weird: the zfs and zpool commands hang for a while and need 10 seconds to fail.
Apparently, zfs and zpool try to open /dev/zfs about a thousand times.
Why so inefficient, and why retry so hard?
Couldn't zfs/zpool simply fail with less effort, or even detect when they are run inside a container (and print a notice/warning)? (A detection sketch is at the bottom of this post.)
Why retry a thousand times to access a non-existent device file?
The tools could simply print the information about the missing device on the first failure and then retry for some more time in a less aggressive, more verbose way, for example:
zfs list
.../dev/zfs is missing. retry 1s.....2s.....3s....
or whatever.
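Here is a minimal sketch in C of what I mean (purely hypothetical, not the actual libzfs code; the 10 second budget, the message text and the backoff steps are just illustrative):

/*
 * Hypothetical sketch of a friendlier retry: report the missing device
 * immediately, then back off exponentially instead of hammering open()
 * every ~10 ms. This is NOT the real OpenZFS code.
 */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int
open_zfs_dev(int timeout_s)
{
	int fd = open("/dev/zfs", O_RDWR | O_CLOEXEC);
	if (fd >= 0)
		return (fd);

	/* Tell the user what is wrong on the very first failure. */
	fprintf(stderr, "/dev/zfs is missing, retrying for up to %d s:",
	    timeout_s);

	/* Exponential backoff: wait 1 s, 2 s, 4 s, ... up to the budget. */
	for (int waited = 0, delay = 1; waited < timeout_s;
	    waited += delay, delay *= 2) {
		if (delay > timeout_s - waited)
			delay = timeout_s - waited;
		fprintf(stderr, " ...%ds", waited + delay);
		sleep(delay);
		fd = open("/dev/zfs", O_RDWR | O_CLOEXEC);
		if (fd >= 0) {
			fprintf(stderr, "\n");
			return (fd);
		}
	}
	fprintf(stderr, "\ngiving up: /dev/zfs never appeared\n");
	return (-1);
}

int
main(void)
{
	return (open_zfs_dev(10) >= 0 ? 0 : 1);
}

That is four open() attempts spread over the same 10 seconds instead of a thousand, and the user sees what is going on from the first second.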
root@pbsct:/backups2# time zfs list
/dev/zfs and /proc/self/mounts are required.
Try running 'udevadm trigger' and 'mount -t proc proc /proc' as root.
real 0m10.016s
user 0m0.022s
sys 0m0.100s
root@pbsct:/backups2# time zpool status
/dev/zfs and /proc/self/mounts are required.
Try running 'udevadm trigger' and 'mount -t proc proc /proc' as root.
real 0m10.019s
user 0m0.019s
sys 0m0.114s
root@pbsct:/backups2# strace zfs list 2>&1 |grep openat |grep "No such file"|wc -l
1010
root@pbsct:/backups2# strace zpool status 2>&1 |grep openat |grep "No such file"|wc -l
1006
The syscall that is repeated more than 1000 times is the one below; at roughly 1000 attempts in 10 seconds, that is a poll about every 10 ms:
openat(AT_FDCWD, "/dev/zfs", O_RDWR|O_CLOEXEC) = -1 ENOENT (No such file or directory)
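For reference, the container-detection idea from above could look something like this. A sketch only, assuming systemd-style environments (LXC sets container= in PID 1's environment, and systemd writes /run/systemd/container inside the container); this is not actual OpenZFS code:

/*
 * Hypothetical sketch: detect an LXC-style container so the tools
 * could warn immediately instead of retrying for 10 seconds.
 */
#include <stdio.h>
#include <string.h>

static int
in_container(void)
{
	/* systemd inside a container creates this marker file. */
	FILE *f = fopen("/run/systemd/container", "r");
	if (f != NULL) {
		fclose(f);
		return (1);
	}
	/* PID 1's environment is NUL-separated key=value pairs. */
	f = fopen("/proc/1/environ", "r");
	if (f == NULL)
		return (0);
	char buf[4096];
	size_t n = fread(buf, 1, sizeof (buf) - 1, f);
	fclose(f);
	buf[n] = '\0';
	for (char *p = buf; p < buf + n; p += strlen(p) + 1) {
		if (strncmp(p, "container=", 10) == 0)
			return (1);
	}
	return (0);
}

int
main(void)
{
	if (in_container())
		fprintf(stderr, "notice: running inside a container; "
		    "/dev/zfs is usually not available here\n");
	return (0);
}

systemd-detect-virt --container does this same kind of check, so the tools would not even have to implement it themselves.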