kefiras opened this issue 2 weeks ago
In the above configuration you buffer messages for 30 seconds before they are written into a single chunk. The default for chunk_limit_size is 8MB for a memory buffer and 256MB for a file buffer. In logging operator version 4.6.0 the setting for file buffers was also set to 8MB, but that override was removed in 4.7.0 with https://github.com/kube-logging/logging-operator/pull/1729, so fluentd's default of 256MB now applies. I would recommend setting chunk_limit_size to something bigger than 8MB, or using a smaller time window by reducing the timekey value.
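For illustration, an Output along those lines might look like the sketch below. The resource name, broker address, and topic are placeholders, and the buffer values are only examples of raising chunk_limit_size and keeping a 30-second timekey; adjust them to your setup.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Output
metadata:
  name: kafka-output                  # placeholder name
spec:
  kafka:
    brokers: kafka.example.svc:9092   # placeholder broker address
    default_topic: app-logs           # placeholder topic
    format:
      type: json
    buffer:
      timekey: 30s                    # buffer messages for 30 seconds per chunk
      timekey_wait: 10s
      chunk_limit_size: 16MB          # raise the limit above the 8MB memory-buffer default
```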
Thanks for the reply. The thing is that Azure Event Hub's maximum message size is 1MB. It looks like the only thing that can be done is to set a relatively small time window.
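For example, something like the following buffer fragment (a rough sketch, values illustrative and not verified against Event Hub) would keep chunks at or below that cap while flushing on a short timekey; note that fluentd can still log the warning if a single emitted event stream is larger than the limit.

```yaml
    # Buffer fragment of the same Output spec; values are illustrative.
    buffer:
      timekey: 10s             # flush often so chunks stay small
      timekey_wait: 5s
      chunk_limit_size: 1MB    # align with Azure Event Hub's maximum message size
```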
Describe the bug: I am seeing a lot of 'chunk bytes limit exceeds for an emitted event stream' warnings even when chunk_limit_size is set. I suspect this happens only under certain circumstances, but I don't understand when and why.
Expected behaviour: chunk size should not exceed the configured chunk_limit_size.
Steps to reproduce the bug: Run the logging operator with a Kafka output.
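A minimal routing sketch for such a setup might be a Flow that sends all logs in a namespace to the Kafka Output shown earlier; the name and namespace below are placeholders.

```yaml
apiVersion: logging.banzaicloud.io/v1beta1
kind: Flow
metadata:
  name: all-logs          # placeholder name
  namespace: default      # placeholder namespace
spec:
  match:
    - select: {}          # route every log in this namespace
  localOutputRefs:
    - kafka-output        # references the Output sketched above
```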
Additional context: N/A
Environment details:
/kind bug