soenkeliebau opened this issue 1 year ago
Good catch.
Technical question: let's say a customer starts with 1 node and we set the setting to 1 as well. The customer later scales up to 5 nodes and we set it to 3. What will happen to existing topics? They will keep the original setting of 1, correct?
This should change the replication factor to three at that point. I'd have to test it to be 100% sure, but I'm fairly sure this setting is actively monitored by Kafka, since it applies to one specific topic only.
Just to make sure I don't create confusion: this setting only applies to the __consumer_offsets topic. It is not the default replication factor that applies to all new topics created without an explicit replication factor.
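To actually test what happens to an existing __consumer_offsets topic after such a change, the AdminClient can report the replication factor currently in effect. A minimal sketch, assuming a reachable broker at localhost:9092 and kafka-clients 3.1+ (class name and address are just placeholders):

```java
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

public class CheckOffsetsTopicReplication {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Assumed bootstrap address; adjust for the cluster under test.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription description = admin
                    .describeTopics(Collections.singletonList("__consumer_offsets"))
                    .allTopicNames()
                    .get()
                    .get("__consumer_offsets");
            // The size of a partition's replica list is the replication factor
            // actually in effect, regardless of the current broker config value.
            int replicationFactor = description.partitions().get(0).replicas().size();
            System.out.println("__consumer_offsets replication factor: " + replicationFactor);
        }
    }
}
```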
At the very least this needs some documentation, but I believe I'd prefer an automatic setting change.
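As a rough sketch of what such an automatic change could look like (this is a hypothetical helper, not existing operator code), the operator could simply cap the setting at the number of brokers being deployed:

```java
import java.util.Properties;

public class OffsetsTopicOverride {

    // Hypothetical helper: derive offsets.topic.replication.factor from the
    // broker count, capped at Kafka's default of 3 so larger clusters keep
    // the usual durability guarantees.
    static Properties brokerConfigOverrides(int brokerCount) {
        Properties overrides = new Properties();
        int replicationFactor = Math.min(3, Math.max(1, brokerCount));
        overrides.setProperty("offsets.topic.replication.factor", String.valueOf(replicationFactor));
        return overrides;
    }

    public static void main(String[] args) {
        System.out.println(brokerConfigOverrides(1)); // {offsets.topic.replication.factor=1}
        System.out.println(brokerConfigOverrides(5)); // {offsets.topic.replication.factor=3}
    }
}
```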
When deploying Kafka clusters with fewer than three nodes, the default value of 3 for offsets.topic.replication.factor prevents users from reading any values from topics.
Writing works fine if topics are created with a low enough replication factor - or auto-created, which takes the available broker count into account for the replication factor. But when reading for the first time, Kafka internally tries to create the __consumer_offsets topic, which by default requires a replication factor of three. Until this topic has been created, no read requests are allowed, and the broker simply keeps logging errors.
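For reference, the behaviour is easy to reproduce with a plain consumer against a single-broker cluster left at the defaults. A sketch assuming a broker at localhost:9092 and an existing topic named test-topic (both placeholders):

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SingleBrokerReadRepro {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Any group id forces a group coordinator lookup, which requires
        // the __consumer_offsets topic to exist.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "repro-group");
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test-topic"));
            // With offsets.topic.replication.factor=3 on a one-broker cluster,
            // the coordinator cannot be found and poll() returns no records,
            // while the broker keeps logging errors about creating the topic.
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
            System.out.println("Records received: " + records.count());
        }
    }
}
```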
Possible solutions are: