nicolae-gorgias opened this issue 11 months ago
@nicolae-gorgias I don't believe we have specific memory settings, but one thing I would suggest is updating the version - we've just released 1.0.8, and (especially compared to 0.0.14) a lot of dependencies have been updated since then 😄
Echoing what @Paultagoras said: the latest version uses the latest JDBC driver version - v0.5.0 - which was refactored to reduce the memory footprint. @nicolae-gorgias would you mind upgrading and sharing your feedback? If the problem still exists, we'll happily investigate.
@Paultagoras @mshustov thanks, I'll do it and come back with feedback!
@mshustov @Paultagoras Interesting: the vertical line marks the upgrade moment. It changed the memory pool metric, but total memory usage is still the same. What other metrics can I check to understand which components consume it?
Hmm, interesting - I'll have to take another look. To my knowledge we don't control or configure memory usage; that's all handled by Kafka Connect, but maybe there's some flag or setting that we need to provide somewhere.
Is total memory still hitting 100%?
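As a rough pointer for the question above about which components are consuming the memory: one way to look past the Kubernetes-level metric is to ask the JVM itself. This is only a sketch; it assumes an OpenJDK-based image with jcmd on the path and the Connect worker running as PID 1 in the container.

```sh
# Heap occupancy and GC regions as the JVM sees them
jcmd 1 GC.heap_info

# Per-subsystem breakdown (heap, metaspace, threads, code cache, internal/native allocations).
# Requires the worker to be started with -XX:NativeMemoryTracking=summary.
jcmd 1 VM.native_memory summary
```

The native-memory breakdown is useful here because MaxRAMPercentage caps only the heap; anything growing in the other categories would explain the container metric climbing while the heap stays bounded.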
Describe the bug
Hey team, we are running the ClickHouse, JDBC, and BigQuery Kafka Connect connectors with identical settings and resources:
KAFKA_HEAP_OPTS: -XX:+UseContainerSupport -XX:InitialRAMPercentage=60 -XX:MaxRAMPercentage=80
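For context, these percentages are taken against the container memory limit and cap only the Java heap. The calculation below is illustrative only: it assumes a 4 GiB container limit, which is not stated in this issue.

```sh
# Illustrative only: a 4 GiB container memory limit is assumed; the real limit is not shown above.
LIMIT_GIB=4
awk -v l="$LIMIT_GIB" 'BEGIN {
  printf "initial heap ~ %.1f GiB (InitialRAMPercentage=60)\n", l * 0.60;
  printf "max heap     ~ %.1f GiB (MaxRAMPercentage=80)\n",     l * 0.80;
}'
# Metaspace, thread stacks, code cache, and direct buffers are allocated on top of the heap,
# so the container-level memory metric can legitimately sit above the 80% heap cap.
```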
I found that only the ClickHouse connector shows unusual memory usage on the max:kubernetes.memory.usage_pct metric, growing until it hits 100%. You can see it here: red is the ClickHouse connector (90%-100%), blue is the BigQuery connector (80%). The memory pool also looks unstable compared to the BigQuery connector (violet - ClickHouse, blue - BigQuery):
This command returns the same settings on both containers:
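(The command itself isn't reproduced here. For illustration only, assuming a JDK with jcmd inside the container and the worker running as PID 1, the effective flags could be checked like this:)

```sh
# Print the flags the running JVM actually resolved (illustrative; assumes jcmd is available
# in the container image and the Connect worker is PID 1)
jcmd 1 VM.flags | grep -E 'RAMPercentage|UseContainerSupport'
```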
Expected behaviour
ClickHouse Kafka Connect should respect MaxRAMPercentage and stop memory growth at 80%. Do we override some memory settings, or does ClickHouse Kafka Connect have custom ones, that would explain this behavior?
Configuration
Environment