Open l-const opened 3 weeks ago
Do you have a lot of files in trash?
Not really, just 4 small files.
Hey @mmstick, it seems this is happening only when run from source, as seen in the video: the above cosmic-term is built from source and the watchers are not working. Note: I found a script to check the number of inotify watchers in use, and I do not see anything worrying.
```
cosmic-files on master is 📦 v0.1.0 via 🦀 v1.82.0-nightly took 4s
❯ sudo find /proc/*/fd -lname anon_inode:inotify | cut -d/ -f3 | xargs -I '{}' -- ps --no-headers -o '%p %U %c' -p '{}' | uniq -c | sort -nr
      7  122703  kostas  cosmic-files
      7  122676  kostas  cosmic-files
      7  122658  kostas  cosmic-files
      7  122609  kostas  cosmic-files
      7  121097  kostas  cosmic-files
      7  120725  kostas  cosmic-files
      7  120561  kostas  cosmic-files
      6    4334  kostas  cosmic-term
      6    3180  kostas  cosmic-workspac
      6    3080  kostas  cosmic-panel
      6    2973  kostas  cosmic-comp
      6  122744  kostas  cosmic-files
      5   23014  kostas  cosmic-store
```
Seeing here that one process has 6 watchers instead of 7; maybe the missing one is the "Trash" filepath. But again, the new file in my ~/Music folder was not observed, as seen in the video.
Update: the issue lies in the number of running instances. After around 6-7 instances I can reproduce it consistently with every new `cargo run --release`, as seen in the video.
It seems the problem is with `max_user_instances: 128`, which is configured per user and limits the number of inotify instances, not the actual number of watches. New instances beyond that limit fail to allocate watches for every filepath, but I guess we only log an error for the trash watcher. My understanding is that COSMIC uses a lot of notify instances across its apps, so we reach that default limit (which is configurable). Once the limit is reached, every new app instance essentially cannot allocate any watchers at all, hence these issues. I downloaded and built this tool: https://github.com/mikesart/inotify-info . Here is the output, where you can see the limit set to 128 and the count stopped at 125:
```
inotify-info/_release on master
❯ ./inotify-info
------------------------------------------------------------------------------
INotify Limits:
  max_queued_events     16,384
  max_user_instances       128
  max_user_watches     121,786
------------------------------------------------------------------------------
    Pid  Uid   App                            Watches  Instances
   3065  1000  cosmic-settings-daemon              63          1
   3366  1000  xdg-desktop-portal-gtk              54          1
   3074  1000  uresourced                          37          1
  27907  1000  nautilus                            20          1
 180287  1000  cosmic-files                        10          7
 180026  1000  cosmic-files                        10          7
 179706  1000  cosmic-files                        10          7
 179296  1000  cosmic-files                        10          7
 178715  1000  cosmic-files                        10          7
 178668  1000  cosmic-files                        10          7
 177935  1000  cosmic-files                        10          7
 177858  1000  cosmic-files                        10          7
   3782  1000  brave                               10          1
  27929  1000  gvfsd-trash                          9          2
   2973  1000  cosmic-comp                          6          6
   4334  1000  cosmic-term                          6          6
   3080  1000  cosmic-panel                         6          6
   3180  1000  cosmic-workspaces                    6          6
   2813  1000  dbus-broker-launch                   5          1
   3091  1000  cosmic-app-library                   5          1
  23014  1000  cosmic-store                         5          5
   3397  1000  cosmic-greeter                       4          4
   3769  1000  flatpak-session-helper               3          1
   3880  1000  brave                                3          1
   3187  1000  cosmic-osd                           3          3
   3601  1000  evolution-source-registry            2          1
   3130  1000  wireplumber                          2          3
  70131  1000  gpg-agent                            2          2
  27962  1000  tracker-miner-fs-3                   2          2
   4699  1000  gvfs-udisks2-volume-monitor          2          2
   3327  1000  cosmic-applets                       1          1
   3272  1000  xdg-desktop-portal                   1          1
  27953  1000  gvfsd-recent                         1          1
   4726  1000  gvfs-afc-volume-monitor              1          1
   3207  1000  xdg-desktop-portal-cosmic            1          1
   3190  1000  cosmic-bg                            1          1
   3340  1000  dbus-broker-launch                   1          1
   3790  1000  flatpak-portal                       1          1
   3686  1000  evolution-addressbook-factory        1          1
   3668  1000  evolution-calendar-factory           1          1
   3657  1000  goa-daemon                           1          1
   3422  1000  cosmic-applets                       1          1
------------------------------------------------------------------------------
Total inotify Watches:   347
Total inotify Instances: 125
------------------------------------------------------------------------------
```
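As a cross-check, the same headline numbers can be read straight from /proc without the external tool (a sketch; without root you will only see your own processes' file descriptors, so the count may be lower than inotify-info's):

```shell
# Per-user inotify instance limit (what inotify-info reports as max_user_instances)
limit=$(cat /proc/sys/fs/inotify/max_user_instances)

# Count inotify instances currently open: each one is a fd whose /proc
# symlink target is anon_inode:inotify (permission errors are suppressed;
# run with sudo to also count other users' processes)
used=$(find /proc/[0-9]*/fd -lname anon_inode:inotify 2>/dev/null | wc -l)

echo "inotify instances: $used / $limit"
```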
I observed that nautilus and the browsers keep their instance counts very low; when new windows are created, they may just increase the watch count instead. So this does seem like an issue. It is solvable with a config change that raises the limit, but there may also be room to optimize how many inotify instances are created. I am not sure how these things work internally, but I found this doc that explains it well: https://watchexec.github.io/docs/inotify-limits.html
Also, I googled and found this: https://unix.stackexchange.com/questions/532709/what-is-exactly-difference-between-inotify-max-user-instances-and-max-user-watch . Quoting:

> An "instance" is a single file descriptor, returned by inotify_init(). A single inotify file descriptor can be used by one process or shared by multiple processes, so they are rationed per-user instead of per-process. A "watch" is a single file, observed by an inotify instance. Each watch is unique, so they are also rationed per-user. If an application creates too many instances, it either starts too many processes (and does not share inotify file descriptors between processes), or it is just plain buggy — for example, it may leak open inotify descriptors (open and then forget about them without closing). There is also a possibility that the application is just poorly written and uses multiple descriptors where one could suffice (you almost never need more than 1 inotify descriptor).
>
> It's a sysctl parameter: `fs.inotify.max_user_instances`. You can use `sysctl` to change it, or set it permanently by adding it to a config in `/etc/sysctl.d/`.
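For the record, here is what that sysctl change looks like; the value 512 is an arbitrary example, not a recommendation, and the file name under /etc/sysctl.d/ is my own choice:

```shell
# One-off change (lost on reboot):
#   sudo sysctl fs.inotify.max_user_instances=512
#
# Permanent change: create e.g. /etc/sysctl.d/90-inotify.conf containing:
#   fs.inotify.max_user_instances = 512
# and apply it with:
#   sudo sysctl --system

# Check the value currently in effect
# (equivalent to `sysctl fs.inotify.max_user_instances`):
cat /proc/sys/fs/inotify/max_user_instances
```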
OK, I guess if you agree and you do not think this is actionable, go ahead and close the issue.
Cosmic-files version:
Issue/Bug description:
Steps to reproduce: I had opened at least 8 instances of cosmic-files; many of them were built from source with `cargo run --release`. I know this error/warning has been reported before in the Mattermost pop-os channels.

Expected behavior: Not to see this warning, since it means we might not be able to get notifications about filesystem changes through the notify package that is used.

Other notes: We may potentially be "leaking" watchers if we need to de-register them: https://github.com/pop-os/cosmic-files/blob/master/src/app.rs#L643. Maybe we do not do that in all code paths.
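For anyone reproducing this, here is a small sketch of how to count the inotify instances held by one specific process, using the same /proc trick as the find one-liner earlier in the thread (the PID defaults to the current shell just as an example; substitute a cosmic-files PID):

```shell
# Count inotify instances held by a process: each instance is a file
# descriptor whose /proc symlink target is anon_inode:inotify.
pid=$$   # example PID: this shell; substitute a cosmic-files PID
count=0
for fd in /proc/"$pid"/fd/*; do
    case "$(readlink "$fd" 2>/dev/null)" in
        anon_inode:inotify) count=$((count + 1)) ;;
    esac
done
echo "pid $pid holds $count inotify instance(s)"
```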