Describe the bug
Large memory usage increase, from about 300M to 10G, since v2.1.9 (more than 10x that of v2.1.8).
The container's memory keeps increasing until it approaches 10G; memory grows in a sawtooth pattern and is then released.
Under the same traffic test and the same fluent-bit configuration, v2.1.9 appears to introduce this new memory usage pattern, which means more memory must be reserved for fluent-bit to avoid OOM kills.
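For reference, the sawtooth was observed at the container level. A minimal sketch of how the usage can be sampled, assuming cgroup v2 (the path is an assumption and differs under cgroup v1):

```python
#!/usr/bin/env python3
"""Minimal container-memory sampler (illustrative, not from the report).
Assumes cgroup v2; under cgroup v1 the file is memory.usage_in_bytes."""
import time

CGROUP_MEM = "/sys/fs/cgroup/memory.current"  # assumed cgroup v2 path
INTERVAL_SEC = 5

while True:
    with open(CGROUP_MEM) as f:
        used_bytes = int(f.read())
    print(f"{time.strftime('%H:%M:%S')} {used_bytes / 1024**2:.1f} MiB")
    time.sleep(INTERVAL_SEC)
```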
To Reproduce
Reproducibility: 100%
Steps to reproduce the problem:
1. Stress the logging traffic (a hypothetical load-generator sketch follows this list).
2. Monitor the memory usage until the sawtooth pattern appears.
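A minimal sketch of such a load generator, writing JSON lines that match the tail input in the configuration below (path under /var/log/app/*, parser json). The file name, rate, and payload size are assumptions, not from the original test:

```python
#!/usr/bin/env python3
"""Hypothetical load generator: appends JSON log lines that the tail
input below would pick up. Rate and payload size are assumptions."""
import json
import time

LOG_PATH = "/var/log/app/stress.log"   # matches the input's path glob
LINES_PER_SEC = 10_000                 # assumed stress rate
PAYLOAD = "x" * 512                    # assumed per-line payload size

def main() -> None:
    # line-buffered append; the directory is assumed to already exist
    with open(LOG_PATH, "a", buffering=1) as f:
        while True:
            start = time.monotonic()
            for i in range(LINES_PER_SEC):
                f.write(json.dumps({"message": PAYLOAD, "seq": i}) + "\n")
            # keep an approximately constant line rate
            time.sleep(max(0.0, 1.0 - (time.monotonic() - start)))

if __name__ == "__main__":
    main()
```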
Expected behavior
Memory usage stays comparable to v2.1.8 (around 300M) under the same traffic.
Screenshots
![MicrosoftTeams-image (1)](https://github.com/fluent/fluent-bit/assets/59753717/ae22f87a-4c9c-4d61-a797-29ea03cf05c5)
Your Environment
* Version used: v2.1.9
* Configuration:

```
[INPUT]
    name                 tail
    tag                  event.kafka.ingress
    alias                kafka.ingress
    buffer_chunk_size    1m
    buffer_max_size      1m
    read_from_head       true
    refresh_interval     5
    rotate_wait          10
    skip_empty_lines     off
    skip_long_lines      true
    key                  message
    db                   /var/log/kafka.ingress.db
    db.sync              normal
    db.locking           true
    db.journal_mode      off
    db.compare_filename  true
    path                 /var/log/app/*
    exclude_path         /var/log/app/xxx.log,/var/log/app/ingress/*.gz,/var/log/app/ingress/*.tgz
    mem_buf_limit        20MB
    parser               json
    ignore_older         11m
```
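For anyone reproducing with a different directory layout, a rough sketch of which files this input would select, approximating path/exclude_path with Python's fnmatch (fluent-bit's actual glob handling may differ in edge cases):

```python
#!/usr/bin/env python3
"""Rough approximation of the tail input's file selection above.
Uses glob/fnmatch as a stand-in; fluent-bit's matching may differ."""
import fnmatch
import glob
import os

PATH = "/var/log/app/*"
EXCLUDE_PATH = [
    "/var/log/app/xxx.log",
    "/var/log/app/ingress/*.gz",
    "/var/log/app/ingress/*.tgz",
]

for candidate in sorted(glob.glob(PATH)):
    if not os.path.isfile(candidate):
        continue  # skip directories such as /var/log/app/ingress
    if any(fnmatch.fnmatch(candidate, pat) for pat in EXCLUDE_PATH):
        continue
    print("would tail:", candidate)
```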
* Environment name and version: Kubernetes