Closed: mschuwalow closed this 1 month ago
Hi, yes please that sounds interesting. I've never encountered something like that before.
So if I understand correctly it would limit the number of records that get processed more than once to max 1 per partition key? It could be useful for zio-kafka as well.
> So if I understand correctly it would limit the number of records that get processed more than once to max 1 per partition key?
Yeah, exactly.
Cool, I'll open a PR sometime this or next week 👍 We can see what we want to do about zio-kafka afterwards. I agree that it could be useful (and interesting to port :))
Are you interested in having alternative checkpointing / consuming behaviour?
One thing we are using in one of our services behaves the following way:
The idea behind this was to have a compromise between efficiency (big batches of records get checkpointed) and ease of implementation (only a single record will ever be retried due to consumer failures, which saves you from having to code with the assumption of 'arbitrary prefixes' of records getting reprocessed).
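One way the described compromise could work is sketched below. This is my reading of the behaviour, not the actual implementation from the service or from zio-kinesis, and all names (`Checkpointer`, `process_batch`) are hypothetical: checkpoint once per batch for efficiency, but on a processing failure immediately checkpoint the last successfully processed record before propagating the error, so a restarted consumer re-reads only the single record that failed.

```python
class Checkpointer:
    """Stores the highest checkpointed sequence number for one shard.

    Hypothetical stand-in for a real durable checkpoint store.
    """

    def __init__(self):
        self.committed = -1  # nothing checkpointed yet

    def checkpoint(self, seq):
        self.committed = max(self.committed, seq)


def process_batch(batch, checkpointer, process):
    """Process `batch` (a list of (seq, payload) pairs) in order.

    Progress within the batch is tracked only in memory; a durable
    checkpoint is written once per batch, or early on failure.
    """
    last_ok = checkpointer.committed
    try:
        for seq, payload in batch:
            process(seq, payload)
            last_ok = seq  # in-memory progress only
        checkpointer.checkpoint(last_ok)  # one durable write per batch
    except Exception:
        # Persist everything completed so far before failing, so after a
        # restart only the record that raised gets reprocessed.
        checkpointer.checkpoint(last_ok)
        raise
```

With this scheme, a consumer that crashes while processing record N resumes from `committed + 1 == N`, so exactly one record per shard is seen twice, while the happy path still pays for only one checkpoint write per batch.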
If this is something that is interesting to have in this project as well, I can upstream it.