cedricve opened this issue 4 years ago
Try setting a low value for queued.max.messages.kbytes. The default has recently been reduced (to 65536), but IIRC it's still very high in the current release version of the Go client.
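For reference, a minimal sketch of how this setting can be passed when creating a consumer with confluent-kafka-go; the broker address, group id, and the 1024 (1 MB) value are placeholders for illustration, not recommendations.

```go
package main

import (
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	consumer, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers": "localhost:9092",
		"group.id":          "example-group",
		"auto.offset.reset": "earliest",
		// Cap librdkafka's pre-fetch queue; the value is in kilobytes.
		"queued.max.messages.kbytes": 1024,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer consumer.Close()
}
```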
@mhowlett is this a recommended setting, or why is it set to that default? Thank you so much for your advice.
The default settings have traditionally been optimized for very high throughput. The new value, 65 MB, is still a lot of caching, though, and I don't think it would impact max throughput much. You can reduce it even further if you care a lot about memory.
@mhowlett thanks for explaining. I've been reading a bit, but it's still not clear to me what this setting actually does. I understand it's the cache size, but what does it do exactly? When does it allocate memory, and when does it release it?
librdkafka aggressively pre-fetches messages from brokers and caches them. This setting is the maximum size of that cache per topic. Memory is allocated as needed and won't be reduced when no longer needed.
Great, thanks @mhowlett, so reducing the setting will make sure fewer messages can be cached.
memory is allocated as needed and won't be reduced when no longer needed.
Looking at the Kubernetes charts, I see memory usage steadily increasing. So reducing the kbytes setting will make the memory increase less quickly, but I'm still not sure (in my mind at least) why this setting would stop the incremental increase.
Sorry for bothering you with all these questions, I should pay you a consulting fee :P
Just found that some of the logs are growing because we're sending a lot of big objects. Not sure when they should be removed; I played a bit with the log settings of this Helm chart: https://github.com/bitnami/charts/tree/master/bitnami/kafka
I see similar memory consumption behavior to @cedricve with version 1.9.1. My service simply consumes one topic and writes messages to two other topics. Even while the consumer is blocked in GetMessage(-1) and waiting for messages to arrive, the service's memory consumption increases continuously.
Any advice on how to handle this? I expect the memory consumption to be stable when no messages are fetched by the consumer.
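One way to see where the memory is going: librdkafka's allocations live outside the Go heap, so enabling statistics.interval.ms and inspecting the emitted *kafka.Stats events can show how many bytes are sitting in the pre-fetch queues. A hedged sketch, assuming the standard confluent-kafka-go consumer API; the broker address, group id, topic, interval, and the statistics field name mentioned in the comments are illustrative and should be checked against your client version.

```go
package main

import (
	"log"

	"github.com/confluentinc/confluent-kafka-go/kafka"
)

func main() {
	c, err := kafka.NewConsumer(&kafka.ConfigMap{
		"bootstrap.servers":      "localhost:9092",
		"group.id":               "example-group",
		"statistics.interval.ms": 15000, // emit a *kafka.Stats event every 15s
	})
	if err != nil {
		log.Fatal(err)
	}
	defer c.Close()

	if err := c.Subscribe("some-topic", nil); err != nil {
		log.Fatal(err)
	}

	for {
		switch ev := c.Poll(100).(type) {
		case *kafka.Message:
			// process the message as usual
			_ = ev
		case *kafka.Stats:
			// ev.String() is a JSON document; per-partition fields such as
			// fetchq_size report the bytes held in the pre-fetch queue.
			log.Println("librdkafka stats:", ev.String())
		case kafka.Error:
			log.Println("kafka error:", ev)
		}
	}
}
```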
Description
I have about 10 consumers continuously reading from different topics. What I see is that memory increases significantly until it blocks all my consumers. All I do is read from a topic and send the messages to another topic. As suggested, I'm reading from the delivery channel.
I also noticed that after a while the memory gets, to some extent, "stable"; however, at some point it freezes all the consumers, and nothing is read by any consumer anymore.
How to reproduce
I create a consumer and a producer, and then I have a reading function. What this function does is read from a topic and move the messages to the next topic.
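The original code was not included in the issue; below is a hedged sketch of such a read-and-forward loop using the standard confluent-kafka-go API, with hypothetical topic names and simplified error handling.

```go
package main

import "github.com/confluentinc/confluent-kafka-go/kafka"

// forward reads from srcTopic and re-produces each message to dstTopic,
// waiting for the delivery report so the delivery channel never fills up.
func forward(c *kafka.Consumer, p *kafka.Producer, srcTopic, dstTopic string) error {
	if err := c.Subscribe(srcTopic, nil); err != nil {
		return err
	}
	deliveries := make(chan kafka.Event, 1)
	for {
		msg, err := c.ReadMessage(-1) // block until a message arrives
		if err != nil {
			return err
		}
		err = p.Produce(&kafka.Message{
			TopicPartition: kafka.TopicPartition{Topic: &dstTopic, Partition: kafka.PartitionAny},
			Key:            msg.Key,
			Value:          msg.Value,
		}, deliveries)
		if err != nil {
			return err
		}
		// Consume the delivery report before producing the next message.
		if m, ok := (<-deliveries).(*kafka.Message); ok && m.TopicPartition.Error != nil {
			return m.TopicPartition.Error
		}
	}
}
```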
Checklist
Please provide the following information: