Testing canary 0.2.0 with a 6-broker Kafka cluster deployed to Kubernetes with nodes across three AZs, we noticed that the canary topic partitions were not respecting the Kafka rack awareness feature. The canary topic was created by the canary:

I0610 13:03:45.234567 1 topic.go:162] The canary topic __redhat_strimzi_canary was created
I0610 13:03:45.234593 1 consumer.go:135] Waiting consumer group to be up and running
The partition assignment looked like this:
For instance, partition 2 is assigned to brokers 2, 3, and 4, but brokers 2 and 3 are both in AZ 1b:
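The violation described above can be checked mechanically: a partition breaks rack-aware spreading when two of its replicas share a rack even though there are at least as many racks as replicas. Here is a minimal sketch of that check; the full broker-to-rack mapping is an assumption (the report only confirms that brokers 2 and 3 are in us-east-1b), chosen to match a 6-broker cluster spread over three AZs.

```python
# Assumed broker -> rack mapping (only brokers 2 and 3 are confirmed by the report).
broker_rack = {
    0: "us-east-1a", 1: "us-east-1a",
    2: "us-east-1b", 3: "us-east-1b",
    4: "us-east-1c", 5: "us-east-1c",
}

def violates_rack_spread(replicas, rack_of):
    """True if two replicas share a rack while there are enough racks
    for each replica to have its own."""
    racks = [rack_of[b] for b in replicas]
    enough_racks = len(set(rack_of.values())) >= len(replicas)
    return enough_racks and len(set(racks)) < len(racks)

# Partition 2 -> replica brokers, as observed in the report.
assignment = {2: [2, 3, 4]}
for partition, replicas in assignment.items():
    if violates_rack_spread(replicas, broker_rack):
        print(f"partition {partition}: replicas {replicas} share a rack")
```

With the assumed mapping this flags partition 2, since brokers 2 and 3 both sit in us-east-1b while a third rack (us-east-1a) holds no replica at all.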
$ oc logs kafka-instance-kafka-2 | grep -E '^broker.rack'
broker.rack=us-east-1b
$ oc logs kafka-instance-kafka-3 | grep -E '^broker.rack'
broker.rack=us-east-1b
The brokers seem to have been up before the topic was created, so I don't think this is caused by the canary coming up before some of the brokers.