Closed vivekgh24 closed 4 months ago
@jnmoyne @gcolliso @sergiosennder @scottf @bruth Can you please help us on this.
The messages in the batch are acknowledged when commit is invoked for the batch, so if the app is killed before that happens, those un-acknowledged messages will be re-delivered. I do not believe this behavior should be changed, as it is the whole point of commit: to ack the messages only when commit is invoked.
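The at-least-once behavior described above can be sketched as a tiny simulation (plain Python, no NATS client involved; the `PendingAcks` class and its method names are made up for illustration): messages stay redeliverable until they are explicitly acked, so a crash between delivery and ack means the same sequences are delivered again on restart.

```python
# Minimal simulation of JetStream-style at-least-once delivery.
# Illustrative only; PendingAcks is a hypothetical stand-in, not a NATS API.
class PendingAcks:
    def __init__(self, messages):
        self.messages = list(messages)   # stream contents, indexed by sequence
        self.acked = set()               # sequences acked so far

    def deliver_batch(self, size):
        """Deliver the next `size` un-acked sequences, lowest first."""
        pending = [seq for seq in range(len(self.messages))
                   if seq not in self.acked]
        return pending[:size]

    def commit(self, batch):
        """Ack every sequence in the batch; only now is it 'done'."""
        self.acked.update(batch)

stream = PendingAcks([f"msg-{i}" for i in range(5)])
first = stream.deliver_batch(3)        # delivers sequences 0, 1, 2
# App crashes here WITHOUT calling stream.commit(first), so on
# restart the broker hands out the very same sequences again:
redelivered = stream.deliver_batch(3)  # again 0, 1, 2
stream.commit(redelivered)
after_ack = stream.deliver_batch(3)    # now 3, 4 — the acked ones are gone
```

The point of the sketch is that redelivery after a crash-before-ack is by design, not a defect.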
@jnmoyne Thanks for the response. The problem was that the Spark Structured Streaming app consuming these messages from NATS was saving them to the output sink, but it looks like the app was shut down before they were acknowledged, and when we restarted it those un-acknowledged messages (even though they were already saved in the output sink) got reprocessed. OK, so that means we need to handle those duplicates in the consuming application; in our case, the Spark Structured Streaming app. Closing the ticket.
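One common way to "handle the duplicates in the consuming application" is to make the sink idempotent, keyed on the JetStream stream sequence number. A minimal sketch, plain Python for illustration (`write_if_new` and the in-memory `seen` set are hypothetical stand-ins; in a real Spark job this would be something like a `dropDuplicates` on a sequence column, or a MERGE into the Delta table):

```python
# Idempotent-sink sketch: drop any message whose stream sequence number
# has already been written, so redeliveries cannot create duplicate rows.
def write_if_new(sink, seen, seq, payload):
    """Append payload to the sink only if this sequence wasn't written yet."""
    if seq in seen:
        return False          # duplicate from a redelivery; skip it
    seen.add(seq)
    sink.append(payload)
    return True

sink, seen = [], set()
# First run processes sequences 3000-3099, then crashes before acking...
for seq in range(3000, 3100):
    write_if_new(sink, seen, seq, f"test #{seq}")
# ...so after restart the broker redelivers 3000-3099 and continues to 3199.
for seq in range(3000, 3200):
    write_if_new(sink, seen, seq, f"test #{seq}")
# Without dedup the sink would hold 300 rows (100 duplicates); with it, 200.
```

In practice the "seen" state has to live in the sink itself (a unique key, or a merge condition), since in-memory state is exactly what is lost when the app is killed.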
What version were you using?
nats-server: v2.0.0
JetStream version: 2.10.4
What environment was the server running in?
Local, in the Eclipse IDE on my machine
Is this defect reproducible?
1. From the NATS CLI, run the command below to push 10k messages for testing:
nats pub newsub --count=10000 "test #{{Count}}"
2. While the Spark application is processing the messages, stop it.
3. After some time, restart the Spark application.
Given the capability you are leveraging, describe your expectation?
Since 10k messages were pushed to NATS JetStream as input, after the Spark application processed all of them (including the in-between stop and restart), the number of processed messages in the output folder "tmp/outputdelta" should be exactly 10k. That is, the number of input messages should equal the number of output/processed messages.
Given the expectation, what is the defect you are observing?
The number of output/processed messages in the output folder is always greater than the number of input messages. In the above scenario, output messages = 10100, whereas only 10000 messages were pushed to NATS JetStream as input: 100 messages were duplicated!
Observations: as I can see in the tmp/outputdelta/_delta_log folder, the last file generated before stopping the application contains the following:
And the first file generated right after restarting the Spark application contains the following:
As you can see, before the application was stopped it had processed 3100 messages, but after the restart it resumed from 3000 instead of 3100. That's where the 100 duplicated messages in the output folder come from. I'm also using a durable consumer, which redelivers messages if they aren't acknowledged after consumption. So it looks like the last microbatch processed just before the shutdown (3000 to 3100) was never acknowledged, so those messages were redelivered and got duplicated. How can we fix this issue?
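Since the sink's Delta log already records how far processing got, one way to neutralize the redelivery on restart is to drop any redelivered message whose sequence is at or below the last committed position. A hedged sketch, again in plain Python with made-up names (`filter_redelivered` and `last_committed` are illustrative, not part of any NATS or Spark API):

```python
# Restart-time filter: the sink's commit log says how many messages were
# already durably written, so redelivered sequences <= that mark are skipped.
def filter_redelivered(batch, last_committed):
    """Keep only messages beyond the sink's last committed sequence."""
    return [(seq, payload) for seq, payload in batch if seq > last_committed]

# The Delta log says 3100 messages (sequences 1..3100) were committed, but
# the broker redelivers from 3001 because the last microbatch was never acked.
redelivered = [(seq, f"test #{seq}") for seq in range(3001, 3201)]
fresh = filter_redelivered(redelivered, last_committed=3100)
# Only sequences 3101-3200 survive: the 100 duplicates are filtered out.
```

The other half of the fix is on the ack side: acknowledge each microbatch only after the sink commit succeeds, so the broker's notion of "processed" tracks the sink's. Duplicates can still occur in the window between sink commit and ack, which is why the filter (or an idempotent sink) is still needed for exact counts.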