Closed: andrwng closed this issue 9 months ago.
Reproduction steps:

- Set a low `cloud_storage_segment_max_upload_interval_sec` to make partial uploads happen quickly

Resulting warnings on the read replica:

```
WARN 2023-07-25 15:19:23,739 [shard 1] cloud_storage - [fiber3~3~10~0|1|59694ms] - remote.cc:558 - Downloading segment from {dice-bucket}, {key_not_found}, segment at {"7831aae9/kafka/rrr-test/0_17/4-5-263-1-v1.log.1"} not available
WARN 2023-07-25 15:19:23,739 [shard 1] cloud_storage - [fiber11 kafka/rrr-test/0] - remote_partition.cc:354 - exception thrown while reading from remote_partition: NotFound
WARN 2023-07-25 15:19:23,740 [shard 0] kafka - connection_context.cc:451 - Error processing request: cloud_storage::download_exception (NotFound)
```
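For context, the upload interval is a Redpanda cluster config property and can be lowered with `rpk`; the value `1` below is illustrative, not a recommendation:

```shell
# Lower the partial-upload interval so segments are uploaded (and later
# eligible for adjacent-segment merging) quickly. The value 1 is illustrative.
rpk cluster config set cloud_storage_segment_max_upload_interval_sec 1
```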
This issue was closed due to lack of activity. Feel free to reopen if it's still relevant.
Version & Environment

Redpanda version (use `rpk version`): v23.1.13

What went wrong?
Read replicas see `key_not_found` after the source cluster performs an adjacent segment merge. With the v23.1.x `sync_manifest()` implementation, when segments are replaced entirely and removed on the source, the read replica never receives the update that those segments have been removed, so its manifest keeps pointing at deleted objects.
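The stale-manifest behavior can be illustrated with a toy model (hypothetical names and types, not Redpanda's actual code): a sync that only inserts entries it hasn't seen never drops entries the source removed after a merge.

```python
# Toy model of an add-only manifest sync (hypothetical; not Redpanda's code).
# Segments are keyed by base offset; values are (last_offset, object_name).

def add_only_sync(replica, source):
    """Mimic a sync that only adds unseen keys: removals and replacements
    performed on the source are silently ignored."""
    for base, seg in source.items():
        if base not in replica:
            replica[base] = seg
    return replica

# The source merges adjacent segments [0, 99] and [100, 199] into one object
# and deletes the originals.
source = {0: (199, "0-199-merged.log")}
# The replica still holds the pre-merge entries.
replica = {0: (99, "0-99.log"), 100: (199, "100-199.log")}

add_only_sync(replica, source)
# The replica is unchanged: both entries still reference deleted objects,
# so reads through them fail with key_not_found.
print(replica)
```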
What should have happened instead?
The read replica should look for segments whose offset ranges are no longer identical to the source's and replace them in its manifests as needed.
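A minimal sketch of that replacement logic, using the same toy dictionaries (hypothetical, not the actual fix): drop replica entries that no longer match a source segment exactly, then adopt the source's current view.

```python
# Sketch of range-aware manifest sync (hypothetical; not the actual fix).
# Segments are keyed by base offset; values are (last_offset, object_name).

def range_aware_sync(replica, source):
    """Remove replica entries whose segment no longer matches the source
    exactly (e.g. because a merge replaced it), then adopt the source's."""
    for base in list(replica):
        if source.get(base) != replica[base]:
            del replica[base]
    replica.update(source)
    return replica

source = {0: (199, "0-199-merged.log")}
replica = {0: (99, "0-99.log"), 100: (199, "100-199.log")}

range_aware_sync(replica, source)
# The replica now matches the source and no longer references deleted objects.
print(replica)
```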
How to reproduce the issue?
See the reproduction steps at the top of this report.