Closed — doge95 closed this issue 3 years ago
```shell
# wc -l *
  422486 0.log
  777520 0.log.20210721-164752
 1200006 total
```
It looks like every log line in the 0.log file is duplicated: 777,520 + 422,486 * 2 - 6 (nginx startup log lines) = 1,622,486, which matches the number of log entries observed in Kibana.
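The arithmetic above can be checked directly in the shell (the counts come from the `wc -l` output earlier in the report):

```shell
# 422,486 lines of 0.log read twice, plus the 777,520 lines of the rotated
# file read once, minus the 6 nginx startup lines:
echo $((777520 + 422486 * 2 - 6))
# → 1622486
```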
I tested both read_bytes_limit_per_second=100000 and read_bytes_limit_per_second=500000 and hit the same duplicated-log issue.
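For context, `read_bytes_limit_per_second` is an `in_tail` parameter. A minimal sketch of where it sits in a Fluentd source block (paths and tag here are illustrative, not taken from the report's values.yaml):

```
<source>
  @type tail
  path /var/log/containers/*.log          # illustrative path
  pos_file /var/log/fluentd-containers.log.pos
  tag kubernetes.*
  read_bytes_limit_per_second 500000      # also tested with 100000
  <parse>
    @type json
  </parse>
</source>
```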
Thanks for your report. I've confirmed the bug.
#3466 will fix it.
Hey @ashie, thanks for the quick confirmation. May I ask when it will be released?
@doge95 released as v1.13.3.
https://github.com/fluent/fluentd/blob/master/CHANGELOG.md#bug-fix
Hey @kenhys, thanks a lot for the quick fix! I have tested it out and I am no longer seeing this issue.
Describe the bug
Continuing from https://github.com/fluent/fluentd/issues/3434: I followed @ashie's suggestion and ran the stress test again on our EFK stack with the read_bytes_limit_per_second parameter and Fluentd v1.13.2. However, I found that logs are duplicated in Elasticsearch.
To Reproduce
Container Runtime Version: containerd://1.4
Kubelet logging configuration: --container-log-max-files=50 --container-log-max-size=100Mi
Expected behavior
Expect Kibana to also receive 1,200,000 logs. However, it received 1,622,486 entries.
Your Environment
Your Configuration
Fluentd Chart values.yaml:
Your Error Log
Additional context
When the log is rotated, two entries are added to the pos file.
Pos file:
One example of the duplicated log:
The above log appears twice in Kibana, but only once in the 0.log file.
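For readers unfamiliar with the pos file mentioned above: each entry records the watched path, the current read offset, and the file's inode (both in hex). After a rotation, in_tail starts tracking the new inode, which is presumably why two entries show up. A hypothetical example (path, offsets, and inodes are illustrative, not the reporter's actual file):

```
/var/log/containers/nginx-abc.log	000000000001a2b3	0000000000521c9e
/var/log/containers/nginx-abc.log	0000000000000000	0000000000521ca1
```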