Bug Report

Describe the bug

Hello,
The multiline filter crashes on pods that generate a large volume of logs, once Emitter_Mem_Buf_Limit is reached. On pods with a normal or low volume of logs it works without problems.

To Reproduce

This is my configuration (I left only the relevant parts):
[INPUT]
    Name              tail
    Path              /var/log/containers/*.log
    DB                /var/log/containers/fluentbit_db.db
    Parser            docker
    Tag               kube.*
    Mem_Buf_Limit     10MB
    Buffer_Chunk_Size 256k
    Buffer_Max_Size   256k
    Skip_Long_Lines   On
    Refresh_Interval  1
    multiline.parser  docker,cri
......

[FILTER]
    Name                   multiline
    Match                  kube.*
    multiline.key_content  log
    multiline.parser       java,go,python
    Emitter_Mem_Buf_Limit  2.4GB

[FILTER]
    Name                 kubernetes
    Match                kube.*
    Buffer_Size          512k
    Merge_Log            On
    Merge_Log_Key        log_json
    Merge_Log_Trim       On
    Keep_Log             On
    K8S-Logging.Parser   On
    K8S-Logging.Exclude  On
....
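
To reproduce without needing a chatty pod, something like the following should drive the filter and its emitter at a controlled rate. This is only a sketch: the tag, the sample record, the Rate value, and the deliberately small Emitter_Mem_Buf_Limit are illustrative values I chose, not part of my production setup.

[INPUT]
    Name   dummy
    # hypothetical tag, chosen to match the filter's kube.* pattern
    Tag    kube.stress.test
    # any payload with a "log" key works; this one triggers the java parser
    Dummy  {"log": "java.lang.RuntimeException: simulated error"}
    # events per second; raise until the emitter limit is hit
    Rate   10000

[FILTER]
    Name                   multiline
    Match                  kube.*
    multiline.key_content  log
    multiline.parser       java
    # small limit so the crash condition is reached quickly
    Emitter_Mem_Buf_Limit  10MB
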
Your Environment
Version used: Fluent Bit v1.8.12
Environment name and version (e.g. Kubernetes? What version?): Kubernetes v1.20.4-eks
CPU and memory load after enabling the multiline filter: I tried increasing the container memory limit and Emitter_Mem_Buf_Limit to a few GB, and the process still crashed.
Additional context
The Fluent Bit container keeps crashing after it reaches the memory limit configured for the container. In addition, many errors like

[error] [input:emitter:emitter_for_multiline.0] error registering chunk with tag:

are flooding the Fluent Bit logs.
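
In case it helps with triage: a variant I have not yet verified would move the filter's emitter onto filesystem buffering, so chunks spill to disk instead of accumulating in memory. This assumes Emitter_Storage.type is honored by the multiline filter in this version; filesystem buffering also requires storage.path in the SERVICE section.

[SERVICE]
    # filesystem buffering needs a storage path
    storage.path  /var/log/flb-storage/

[FILTER]
    Name                   multiline
    Match                  kube.*
    multiline.key_content  log
    multiline.parser       java,go,python
    # spill emitter chunks to disk rather than holding them all in memory
    Emitter_Storage.type   filesystem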