Motivation:

Previously, we failed the entire KafkaConsumer if storing a message offset through RDKafkaClient.storeMessageOffset failed because the partition the offset should be committed to was unassigned (which can happen during a rebalance). We should not fail the consumer when committing during a rebalance.

The worst thing that can happen here is that storing the offset fails and we re-read a message, which is fine since KafkaConsumers with automatic commits are designed for at-least-once processing:
https://docs.confluent.io/platform/current/clients/consumer.html#offset-management
Modifications:

RDKafkaClient.storeMessageOffset: don't throw when receiving the error RD_KAFKA_RESP_ERR__STATE
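A minimal sketch of what this change could look like, assuming a Swift wrapper around librdkafka's rd_kafka_offset_store exposed through a C-shim module. The module name, function signature, handle parameter, and KafkaError type below are illustrative assumptions, not the actual RDKafkaClient code; only the librdkafka call and the error codes are real:

```swift
import Crdkafka // assumed name of the C-shim module exposing librdkafka

// Illustrative error type wrapping a librdkafka error code.
struct KafkaError: Error {
    let code: rd_kafka_resp_err_t
}

// Sketch: store a consumed message's offset so the next automatic commit
// picks it up. Treats RD_KAFKA_RESP_ERR__STATE (partition unassigned,
// e.g. during a rebalance) as benign instead of failing the consumer.
func storeMessageOffset(
    topicHandle: OpaquePointer?, // rd_kafka_topic_t *
    partition: Int32,
    offset: Int64
) throws {
    let error = rd_kafka_offset_store(topicHandle, partition, offset)

    if error == RD_KAFKA_RESP_ERR_NO_ERROR {
        return
    } else if error == RD_KAFKA_RESP_ERR__STATE {
        // The partition is currently unassigned, e.g. because a rebalance
        // is in progress. Skip storing the offset: at worst the message is
        // re-read later, which matches the at-least-once semantics of
        // auto-committing consumers.
        return
    } else {
        // Any other error still surfaces to the caller as before.
        throw KafkaError(code: error)
    }
}
```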