Open · kphelps opened this issue 3 weeks ago
@kphelps In case the partition racks have changed, the rack-aware assignment logic is run again.
Just before the log line `effective subscription list changed from 1 to 1 topic(s):` there's this code:
```c
/* Compare to existing to see if anything changed. */
if (!rd_list_cmp(rkcg->rkcg_subscribed_topics, tinfos,
                 rd_kafka_topic_info_cmp)) {
        /* No change */
        rd_list_destroy(tinfos);
        return rd_false;
}
```
The topic compares as exactly the same only if nothing, including the partition racks, has changed.
During a rolling restart, the restarting broker is not reported by the metadata request, and that can change the list of reported racks. It seems that when a broker is merely missing we could avoid considering the partition racks changed. We have to check what the Java client is doing here.
Description
When using cooperative-sticky assignment with fetch-from-follower, metadata changes during a rolling restart trigger frequent rebalances. This seems to be related to KIP-881's behavior of rebalancing when the set of racks changes. However, no reassignment is being performed; I am only restarting the cluster.
How to reproduce
Set up a consumer group that uses fetch-from-follower (i.e., set `client.rack`) and cooperative-sticky assignment, then rolling-restart the cluster. The consumer group will trigger rebalances as the metadata changes. It may be reproducible with other assignment strategies, but I have not tested that yet.
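For reference, a minimal consumer configuration for the repro might look like this (bootstrap servers, group id, and rack name are placeholders, not values from the report):

```properties
# placeholders: adjust to your cluster
bootstrap.servers=broker1:9092
group.id=fff-repro-group
client.rack=rack-a
partition.assignment.strategy=cooperative-sticky
```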
Checklist
IMPORTANT: We will close issues where the checklist has not been completed.
Please provide the following information:
- [x] librdkafka version (release number or git tag): 2.3.0 and 2.4.0 (both tested)
- [x] Apache Kafka version: 3.7.0
- [x] librdkafka client configuration: `client.rack` and `partition.assignment.strategy = cooperative-sticky`
- [x] Operating system: Ubuntu 20.04
- [ ] Provide logs (with `debug=..` as necessary) from librdkafka