Open opoplawski opened 1 year ago
In https://github.com/fail2ban/fail2ban/issues/2256#issuecomment-439941726 I asked about 3 ways it could be made "fixable". Well, not quite correct - https://github.com/fail2ban/fail2ban/issues/3027#issuecomment-846267332 could be another one. Which one would you prefer?
Hmm, I was thinking that perhaps there was something wrong with the way fail2ban was removing the watch. But perhaps it is really an issue in pyinotify; it's hard to tell at this point. Does this seem like a possibility:
If so, is there some way we can make sure all of the events are processed before deleting the watch?
If so, is there some way we can make sure all of the events are processed before deleting the watch?
Sure, one could delay the watch deletion a bit. But I'm unsure it would help, because the handler will already be closed and the WatchManager works asynchronously, so it might just introduce the opposite situation. I'll check whether it is possible to unregister the watch object without pausing the WatchManager (to avoid missed events).
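A minimal pure-Python sketch of the race being discussed (illustrative names only, not pyinotify's real API): raw events are queued asynchronously, so if the watch descriptor is unregistered before the queue is drained, the lookup fails exactly like the "Unable to retrieve Watch object" warning:

```python
from queue import Queue

# Illustrative model: a watch table mapping wd -> path, and a queue of
# raw events filled asynchronously by the inotify reader side.
watches = {9: "/var/log/fail2ban.log"}
events = Queue()

# Events for wd=9 are already queued...
for _ in range(3):
    events.put({"wd": 9, "mask": 0x2})

# ...but the watch is unregistered before the consumer drains the queue.
del watches[9]

# The consumer now meets stale events whose wd no longer resolves -- the
# same situation that produces "Unable to retrieve Watch object ... wd=9".
stale = 0
while not events.empty():
    if events.get()["wd"] not in watches:
        stale += 1

print(stale)  # → 3
```

Delaying the `del` until the queue is empty would avoid the stale lookups, which is the trade-off weighed in the comment above: the deletion has to be deferred without pausing event processing for the other watches.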
Same here.
Fail2ban version 1.1.0.1, AlmaLinux 8.
Installed with:
python3 setup.py install
In fail2ban.log I see that after logrotate:
2023-06-20 03:29:01,581 fail2ban.server [1129011]: INFO rollover performed on /var/log/fail2ban.log
2023-06-20 03:29:04,198 fail2ban.filterpyinotify[1129011]: WARNING Unable to retrieve Watch object associated to <_RawEvent cookie=0 mask=0x2 name='' wd=9 >
2023-06-20 03:29:06,054 fail2ban.filterpyinotify[1129011]: WARNING Unable to retrieve Watch object associated to <_RawEvent cookie=0 mask=0x2 name='' wd=9 >
2023-06-20 03:29:11,543 fail2ban.filterpyinotify[1129011]: WARNING Unable to retrieve Watch object associated to <_RawEvent cookie=0 mask=0x2 name='' wd=9 >
2023-06-20 03:29:44,477 fail2ban.filterpyinotify[1129011]: WARNING Unable to retrieve Watch object associated to <_RawEvent cookie=0 mask=0x2 name='' wd=9 >
and at random times before and after that, with no corresponding fail2ban actions:
2023-06-20 07:00:37,222 fail2ban.filterpyinotify[1129011]: WARNING Unable to retrieve Watch object associated to <_RawEvent cookie=0 mask=0x2 name='' wd=10 >
2023-06-20 07:00:38,422 fail2ban.filterpyinotify[1129011]: WARNING Unable to retrieve Watch object associated to <_RawEvent cookie=0 mask=0x2 name='' wd=10 >
2023-06-20 07:00:44,499 fail2ban.filterpyinotify[1129011]: WARNING Unable to retrieve Watch object associated to <_RawEvent cookie=0 mask=0x2 name='' wd=10 >
2023-06-20 07:00:56,059 fail2ban.filterpyinotify[1129011]: WARNING Unable to retrieve Watch object associated to <_RawEvent cookie=0 mask=0x2 name='' wd=10 >
And one more thing - when I copy the systemd service file with:
cp /opt/fail2ban/build/fail2ban.service /etc/systemd/system/
the service file contains the line:
Environment="PYTHONNOUSERSITE=1"
and reload or restart fail2ban via systemctl, fail2ban doesn't start. When I remove the Environment line above, it's OK.
Same here.
The "issue" is already confirmed... and, as one can see, still open, since there is no simple solution: the warning is thrown from the pyinotify module itself (after we allowed logging from there). Your comment doesn't seem to provide new info, does it?
Environment="PYTHONNOUSERSITE=1" fail2ban doesn't start. When I remove the Environment line above, it's OK.
It has nothing to do with the issue here. It looks like something (fail2ban modules or some dependencies) got installed as user modules. Then it's clear that disabling them would prevent fail2ban from starting.
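One way to check whether user-site modules are the cause (a hypothetical diagnostic, not from the thread): compare imports with and without user site-packages disabled.

```shell
# With PYTHONNOUSERSITE=1 the interpreter ignores ~/.local site-packages,
# mirroring the Environment= line in the service file.
PYTHONNOUSERSITE=1 python3 -c "import site; print(site.ENABLE_USER_SITE)"  # prints False

# If this import only succeeds without PYTHONNOUSERSITE=1, then fail2ban (or
# a dependency) was installed into the user site and the service cannot see it.
PYTHONNOUSERSITE=1 python3 -c "import fail2ban" \
  || echo "fail2ban is only importable via user site-packages"
```

If that turns out to be the case, reinstalling fail2ban into the system site-packages (rather than the user site) should let the service start with the Environment line intact.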
I have two VPS, Ubuntu 22.04.4 LTS. On my old VPS, when I turn loglevel = DEBUG and look with sudo lnav /var/log/fail2ban.log, I see many messages:
2024-07-12 08:41:38,736 fail2ban.filterpyinotify[17268]: DEBUG Event queue size: 16
2024-07-12 08:41:38,737 fail2ban.filterpyinotify[17268]: DEBUG <_RawEvent cookie=0 mask=0x2 name='' wd=2 >
On my second VPS I don't see such messages in DEBUG mode. How can I check why I see so many messages only on the first VPS?
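A small helper (illustrative, not part of fail2ban) for questions like the one above: count the DEBUG raw-event lines per watch descriptor in a log excerpt, which shows which watched file is generating the activity.

```python
import re
from collections import Counter

def count_events_per_wd(log_text):
    """Count pyinotify _RawEvent DEBUG lines per watch descriptor (wd)."""
    counts = Counter()
    pattern = r"<_RawEvent cookie=\d+ mask=0x[0-9a-f]+ name='.*?' wd=(\d+) >"
    for m in re.finditer(pattern, log_text):
        counts[int(m.group(1))] += 1
    return counts

# Sample lines in the format quoted above.
sample = """\
2024-07-12 08:41:38,736 fail2ban.filterpyinotify[17268]: DEBUG Event queue size: 16
2024-07-12 08:41:38,737 fail2ban.filterpyinotify[17268]: DEBUG <_RawEvent cookie=0 mask=0x2 name='' wd=2 >
2024-07-12 08:41:39,001 fail2ban.filterpyinotify[17268]: DEBUG <_RawEvent cookie=0 mask=0x2 name='' wd=2 >
"""
print(count_events_per_wd(sample))  # → Counter({2: 2})
```

A busy wd usually just means the corresponding log file is written to frequently, so a difference between the two VPS would point at different traffic or logging volume rather than a fail2ban problem.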
Details:
Temporary solution: set backend = systemd in /etc/fail2ban/jail.conf and restart fail2ban.
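The workaround above corresponds to a fragment like this (shown for jail.conf as in the comment, though a jail.local override is the usual place for local changes):

```ini
[DEFAULT]
# Read from the systemd journal instead of watching log files with
# pyinotify, which sidesteps the watch-removal warning entirely.
backend = systemd
```

Note this changes the log source for all jails that inherit the default backend, so it only helps where the relevant services log to the journal.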
Environment:
The issue:
On service/machine shutdown we see this message in the logs. Similar to #2256 but not related to logrotate. I know it's just a warning, but it shows up in logwatch, and it seems it shouldn't be generated during normal operation.
Steps to reproduce
I suspect the use of the recidive jail monitoring /var/log/fail2ban.log may be the trigger, but it's hard to say. It doesn't happen with every shutdown.
Expected behavior
No WARNINGs
Observed behavior