ninsmiracle opened 3 months ago
But we can work around this problem by setting duplicate_log_batch_bytes = 0.
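For example, something like this in the replica server config (the section name is my assumption; the option name is the one mentioned above):

```ini
[replication]
  # With batching disabled, each duplicated mutation carries a single write,
  # so its decree is persisted together with that write.
  duplicate_log_batch_bytes = 0
```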
So I'm not sure whether I should fix this 'bug'.
If I should, would the right fix be to apply all the requests of a dup mutation together with its decree in a single write_batch?
@acelyc111 @empiredan
Bug Report
At present, duplication works like this: when the backup cluster executes the dup RPC handler, the multiple requests carried by a dup mutation are written to RocksDB in several separate writes, and the decree of that mutation is written along with each of them. If the backup cluster takes a checkpoint while these writes are still in progress, the checkpoint may record the decree even though the data belonging to that decree has not been completely written to RocksDB. If a learner on the backup cluster then bootstraps from this checkpoint, it will request plog starting from decree + 1 after learning, so the remaining dup requests of that decree are never learned and data is lost.
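To illustrate the idea behind the question above, here is a minimal sketch (not the actual Pegasus code) of applying all requests of a dup mutation plus its decree in one rocksdb::WriteBatch, so a checkpoint can never observe a decree whose data is only partially applied. The apply_duplicated_mutation helper and the "last_applied_decree" meta key are made up for illustration.

```cpp
#include <cstdint>
#include <string>
#include <vector>

#include <rocksdb/db.h>
#include <rocksdb/write_batch.h>

struct dup_request {
  std::string key;
  std::string value;
};

// Hypothetical helper: write all requests of one duplicated mutation and the
// decree marker in a single atomic batch.
rocksdb::Status apply_duplicated_mutation(rocksdb::DB *db,
                                          int64_t decree,
                                          const std::vector<dup_request> &requests) {
  rocksdb::WriteBatch batch;
  for (const auto &req : requests) {
    batch.Put(req.key, req.value);
  }
  // "last_applied_decree" stands in for wherever the replica persists the last
  // committed decree.
  batch.Put("last_applied_decree", std::to_string(decree));

  rocksdb::WriteOptions opts;
  // The batch is applied atomically, so a checkpoint taken at any point either
  // contains all requests of this decree or none of them.
  return db->Write(opts, &batch);
}
```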