jgordijn opened this issue 11 months ago (status: Open)
Thanks @jgordijn, that is a really interesting finding! Indeed, shutdown is something that we need to improve. I want to look at this beginning next year.
For now you could apply the workaround of setting a shorter maxRebalanceDuration, e.g. 15 seconds.
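For illustration, a minimal sketch of what that could look like (assuming zio-kafka's ConsumerSettings builder methods; the broker address, group id, and the 15-second value are placeholders):

```scala
import zio._
import zio.kafka.consumer._

// Sketch only: names and values are placeholders.
val settings: ConsumerSettings =
  ConsumerSettings(List("localhost:9092"))
    .withGroupId("tester1")
    .withRebalanceSafeCommits(true)
    // Workaround: cap how long a rebalance waits for outstanding commits.
    .withMaxRebalanceDuration(15.seconds)
```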
If I set maxRebalanceDuration, won't I fall back to the old behaviour and get duplicates?
Also, please look at the rebalance time. It is not only the shutdown that seems to fail.
If I set maxRebalanceDuration, won't I fall back to the old behaviour and get duplicates?
It depends. In the old behavior the program gets no chance at all to do commits. If you set a maxRebalanceDuration, it at least gets some chance. Most programs commit everything within a few seconds. With slow processing like here, it will be necessary to reduce the number of records that are pre-fetched (withMaxPollRecords(10) is a good start, but I recommend you also disable prefetching with withoutPartitionPreFetching) so that processing and committing are done before the deadline.
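As a rough sketch of those two settings combined (again assuming the current ConsumerSettings API; the helper name is just for illustration):

```scala
import zio.kafka.consumer.ConsumerSettings

// Sketch only: keep less work in flight so commits can finish
// before the rebalance deadline.
def tuneForSlowProcessing(settings: ConsumerSettings): ConsumerSettings =
  settings
    .withMaxPollRecords(10)      // fetch at most 10 records per poll
    .withoutPartitionPreFetching // do not queue up records ahead of the stream
```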
Also, please look at the rebalance time. It is not only the shutdown that seems to fail.
Can you elaborate on that please? What error do you see?
I start consumer1. Then I start consumer2, which doesn't start consuming immediately (or even after a short delay). Meanwhile the new consumer doesn't show anything in the log, and in consumer1 I see:
09:11:35.370 zio-kafka-runloop-thread-0 INFO logger - [Consumer clientId=consumer-tester1-1, groupId=tester1] Request joining group due to: group is already rebalancing
It takes nearly 3 minutes (differs per run) before consumer2 starts consuming.
Did you add withoutPartitionPreFetching to the consumer settings already?
Ah yes, the flag (withoutPartitionPreFetching) seems to work. Why is this flag there? The Kafka client also has prefetching, right?
I'm a bit worried about the number of flags and combinations I need to use to get it to work.
The kafka-client does not do any pre-fetching. By default zio-kafka does quite a bit of pre-fetching.
I'm a bit worried about the number of flags and combinations I need to use to get it to work.
I know... Kafka has a huge number of knobs that can be turned. It's a pain to support people with it because there is always one more setting that can be tweaked.
I am really happy that this solved the issue though! We will need to add this gotcha to the documentation.
I read about prefetching here: https://www.conduktor.io/kafka/kafka-consumer-important-settings-poll-and-internal-threads-behavior/#Kafka-Consumer-Poll-Behavior-0 . Is that incorrect?
I can only speculate. Conduktor uses zio-kafka, so most probably they are describing how their product works, not the underlying java client.
Why did you close this issue? The issue with shutdown is not resolved.
@jgordijn Yep, you are right. Thanks for correcting me.
It seems that on shutdown the stream is stopped, but the rebalanceListener is waiting until the last message is committed.
@erikvanoosten You think this would be solved by #1201 ?
It seems that on shutdown the stream is stopped, but the rebalanceListener is waiting until the last message is committed.
@erikvanoosten You think this would be solved by #1201 ?
Yes, that would be my expectation 😄
Sounds related and perhaps fully fixed by #1358.
Shutdown of one of the members of a consumer group will result in a rebalance. Is that the situation here as well, or was it the shutdown of a member of a single-instance consumer group that resulted in long shutdown times?
During rebalancing, partitions are sometimes assigned and then removed without any records having been fetched for those partitions. Partition streams are created and emitted in the top-level stream but may not have been pulled from by the downstream stages (in user code). In that case, the safe rebalance mechanism waits for an end signal from those partition streams that never arrives, and only times out after the maxRebalanceDuration.
We will be adding some logging for this situation.
I tried out the new withRebalanceSafeCommits feature and it has unexpected behavior. Maybe it has something to do with the slow (100 ms) processing per message, but a 10-message poll should mitigate this. I would thus expect rebalancing to happen in (worst case) 1 second (10 × 100 ms).
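For context, a rough sketch of the kind of pipeline being described (not the exact code from this issue; the topic name, serdes, and the 100 ms delay are stand-ins):

```scala
import zio._
import zio.kafka.consumer._
import zio.kafka.serde.Serde

// Sketch only: consume, spend ~100 ms per record, then commit in batches.
val pipeline: ZIO[Consumer, Throwable, Unit] =
  Consumer
    .plainStream(Subscription.topics("test-topic"), Serde.string, Serde.string)
    .mapZIO(record => ZIO.sleep(100.millis).as(record.offset)) // slow processing
    .aggregateAsync(Consumer.offsetBatches)                    // batch offsets
    .mapZIO(_.commit)                                          // commit each batch
    .runDrain
```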