Open uristernik opened 10 months ago
Thanks for your report!
Fluentd's tail plugin was outputting `If you keep getting this message, please restart Fluentd`. After coming across https://github.com/fluent/fluentd/issues/3614, we implemented the workaround suggested there:
- changed `follow_inodes` to `true`
- set `rotate_wait` to `0`
So, `follow_inodes false` has a similar issue. Could you please report the `follow_inodes false` problem as a new issue?
@daipom In this case I had `follow_inodes true`. Do you want me to open a new issue just for tracking?
@uristernik
Wasn't there a problem with `follow_inodes false` as well?
I'd like to sort out the `follow_inodes false` problem and the `follow_inodes true` problem separately.
I'd like to know if there is any difference between `follow_inodes false` and `follow_inodes true`, for example, whether the same resource leakage occurs with `follow_inodes false`.
If there is no particular difference, we are fine with this for now. Thanks!
We are facing the same issue.
Error message: `Skip update_watcher because watcher has been already updated by other inotify event path="/usr/local/logs/app/app.log" inode=20617294 inode_in_pos_file=0`
We are using:
read_from_head true
rotate_wait 30
follow_inodes true
enable_stat_watcher false
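For context, a minimal `in_tail` source with these options might look like the sketch below; the path comes from the error message above, while `pos_file`, `tag`, and the parser are placeholder assumptions rather than values from the actual deployment:

```
<source>
  @type tail
  # path taken from the error message above; pos_file and tag are placeholders
  path /usr/local/logs/app/app.log
  pos_file /var/log/fluentd/app.log.pos
  tag app.log
  # options listed in this comment
  read_from_head true
  rotate_wait 30
  follow_inodes true
  enable_stat_watcher false
  <parse>
    @type none
  </parse>
</source>
```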
Memory keeps gradually growing too! Any resolution on this?
@shadowshot-x Sorry for my late response. Thanks for your report. Could you please share the Fluentd (td-agent/fluent-package) version and OS?
Describe the bug
Fluentd's tail plugin was outputting `If you keep getting this message, please restart Fluentd`. After coming across https://github.com/fluent/fluentd/issues/3614, we implemented the workaround suggested there:
- changed `follow_inodes` to `true`
- set `rotate_wait` to `0`
Since then we are no longer seeing the original `If you keep getting this message, please restart Fluentd`, but we are still seeing lots of `Skip update_watcher because watcher has been already updated by other inotify event`. This is paired with a pattern of memory leaking and a gradual increase in CPU usage until a restart occurs. To mitigate this I added `pos_file_compaction_interval 20m` as suggested here, but this had no effect on the resource usage.
Related to https://github.com/fluent/fluentd/issues/3614, more specifically https://github.com/fluent/fluentd/issues/3614#issuecomment-1871484810.
The suspicion is that some Watchers are not handled properly thus leaking and increasing CPU/Memory consumption until the next restart.
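To make the settings above concrete, here is a minimal sketch of an `in_tail` source with the workaround and the attempted mitigation applied; `path`, `pos_file`, `tag`, and the parser are placeholder assumptions, since the full configuration is not reproduced here:

```
<source>
  @type tail
  # Placeholder path/pos_file/tag -- not the actual deployment values.
  path /var/log/containers/*.log
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  # Workaround from fluent/fluentd#3614:
  follow_inodes true
  rotate_wait 0
  # Attempted mitigation (no effect on resource usage, as described above):
  pos_file_compaction_interval 20m
  <parse>
    @type none
  </parse>
</source>
```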
To Reproduce
Deploy fluentd (version v1.16.3-debian-forward-1.0) as a DaemonSet in a dynamic Kubernetes cluster consisting of 50-100 nodes. This is the fluentd config:
Expected behavior
CPU / Memory should stay stable.
Your Environment
Your Configuration
Additional context
https://github.com/fluent/fluentd/issues/3614