bsm / sarama-cluster

Cluster extensions for Sarama, the Go client library for Apache Kafka 0.9 [DEPRECATED]
MIT License
1.01k stars · 222 forks

Consumer offset resetting after partition leader change #270

Closed · muirdm closed 6 years ago

muirdm commented 6 years ago

We saw this issue in production when restarting a Kafka node. I was able to reproduce it on a test cluster for a while, but frustratingly can no longer reproduce it. Production was running Kafka 1.0, and I reproduced it on 2.0 in our test cluster (after I had given up trying to reproduce it on 1.0 and moved on to other testing).

While I could reproduce it, I added some debug messages and found that the consumer offset reset was happening when sarama-cluster fetched the next consumer partition offset as, say, 100, but in ConsumePartition sarama fetched the newest offset for that partition as 99. chooseStartingOffset in sarama returns ErrOffsetOutOfRange in that case, and sarama-cluster falls back to the default offset (which in our case is oldest) when it sees ErrOffsetOutOfRange in newPartitionConsumer.
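For illustration, here is a minimal, self-contained sketch of the failure mode described above. The function names mirror the real ones, but the bodies are hypothetical simplifications, not sarama's actual code: the real `chooseStartingOffset` and `newPartitionConsumer` do more work against live broker metadata.

```go
package main

import (
	"errors"
	"fmt"
)

// ErrOffsetOutOfRange mirrors sarama's error of the same name.
var ErrOffsetOutOfRange = errors.New("kafka: offset out of range")

// chooseStartingOffset is a simplified stand-in for sarama's check in
// ConsumePartition: a requested offset outside [oldest, newest] as
// reported by the broker is out of range.
func chooseStartingOffset(requested, oldest, newest int64) (int64, error) {
	if requested < oldest || requested > newest {
		return 0, ErrOffsetOutOfRange
	}
	return requested, nil
}

// startConsumer sketches sarama-cluster's fallback in newPartitionConsumer:
// on ErrOffsetOutOfRange it retries with the configured default offset
// (oldest, in the reporter's setup) instead of the committed offset.
func startConsumer(committed, oldest, newest, defaultOffset int64) int64 {
	off, err := chooseStartingOffset(committed, oldest, newest)
	if errors.Is(err, ErrOffsetOutOfRange) {
		return defaultOffset // silently rewinds the consumer
	}
	return off
}

func main() {
	// Committed offset 100 read from one broker, but newest offset 99
	// reported by another: the committed offset looks out of range.
	start := startConsumer(100, 0, 99, 0 /* oldest */)
	fmt.Println(start) // prints 0 — the consumer rewound to oldest
}
```

This is why a one-message disagreement between brokers (100 vs. 99) is enough to replay the whole partition from oldest.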

Based on my understanding, with offsets.commit.required.acks=-1 and min.insync.replicas=2 (replication_factor=3), the consumer offsets should be consistent in kafka (i.e. it is not possible for sarama-cluster to read offset=100 from one node, and sarama to read offset=99 from another). Does that sound right to you?

And since the sarama-cluster read of offset=100 happens first, I don't think the cause is someone else consuming the partition and incrementing the offset between fetches of the offset (since that would result in offset reads of 100,101, not 100,99).

One other thought: we were producing with Producer.RequiredAcks = WaitForLocal, not WaitForAll. Could that explain what I am seeing? For example, we produce a message that is consumed immediately but has not yet replicated. The consumer offset becomes 100 on the leader, but a replica still has a newest offset of 99; after a leader change, that replica's view wins.
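If under-replicated writes are the cause, tightening the producer acks would close that window. A config fragment (using sarama's real RequiredAcks constants; whether this fixes the observed reset is the open question above, not a confirmed conclusion):

```go
cfg := sarama.NewConfig()
// WaitForLocal acks as soon as the partition leader has appended the
// message; a follower can still lag, so after a leader change the new
// leader's newest offset may trail the committed consumer offset.
// cfg.Producer.RequiredAcks = sarama.WaitForLocal
//
// WaitForAll (acks=-1) waits for all in-sync replicas before acking,
// so no acked message can be lost by a leader change.
cfg.Producer.RequiredAcks = sarama.WaitForAll
```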

I don't think there is necessarily a bug in sarama-cluster, but wanted to ask here first since maybe someone has an idea what is going on.

dim commented 6 years ago

Unfortunately, and despite my best efforts, I still have gaps in my understanding of certain Kafka behaviours, and this seems to be one of them. The only thing you can do is enable debug logging on the broker nodes and in sarama and trace it all the way down. BTW, sarama-cluster is now deprecated, since https://github.com/Shopify/sarama/pull/1099 has been released.