cockroachdb / pebble

RocksDB/LevelDB inspired key-value database in Go
BSD 3-Clause "New" or "Revised" License

db: pipeline WAL rotation #2540

Open jbowens opened 1 year ago

jbowens commented 1 year ago

Let L be the fsync latency of the WAL storage medium.

When the memtable and WAL are rotated, the first batch applied to the new WAL may, at worst, need to wait for:

  1. An in-flight fsync of entries to the previous WAL to complete (at worst, L).
  2. A final fsync of entries to the previous WAL that did not make the in-flight fsync (L).
  3. A final fsync in LogWriter.Close to ensure the EOF trailer is synced (L).
  4. An fsync of the WAL directory to ensure the new WAL is durably linked into its new name (L).
  5. The fsync of the new batch itself (L).

Cumulatively, these can cause commit tail latencies to increase 5x. There are a few ways this could be reduced.
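To make the serial nature of these waits concrete, here is a rough sketch of the rotation path as enumerated above. Everything in it (names, signatures, the flat sequencing) is a simplified illustration, not Pebble's actual code:

```go
// Sketch of the serialized waits a commit can hit immediately after WAL
// rotation. Each Sync call below can cost roughly L.
package walrotation

import "os"

func rotateAndCommit(prevWAL, newWAL, walDir *os.File, tail, trailer, batch []byte) error {
	// (1) an in-flight fsync of earlier entries to prevWAL finishes elsewhere;
	//     the committer may have to wait up to L for it.

	// (2) final entries that missed the in-flight fsync are written and synced.
	if _, err := prevWAL.Write(tail); err != nil {
		return err
	}
	if err := prevWAL.Sync(); err != nil {
		return err
	}
	// (3) LogWriter.Close writes and syncs the EOF trailer.
	if _, err := prevWAL.Write(trailer); err != nil {
		return err
	}
	if err := prevWAL.Sync(); err != nil {
		return err
	}
	// (4) the WAL directory is synced so the new WAL is durably linked.
	if err := walDir.Sync(); err != nil {
		return err
	}
	// (5) the new batch is written and synced to the new WAL.
	if _, err := newWAL.Write(batch); err != nil {
		return err
	}
	return newWAL.Sync()
}
```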

(2) & (3) could together be bounded by a single L with more coordination between LogWriter.Close and the LogWriter's flush loop: the final flush of log entries (2) could also write the EOF trailer and perform a single sync covering both:

https://github.com/cockroachdb/pebble/blob/f6eaf9a696e6344af4660b2ac7e30e70539ac2f5/record/log_writer.go#L638-L645
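A minimal sketch of the idea, using a simplified stand-in for the LogWriter (none of these types or fields are Pebble's actual internals): Close appends the EOF trailer to whatever entries are still pending and issues a single write+fsync, so (2) and (3) cost one L together.

```go
// Sketch only: folds the EOF trailer into the final flush of pending entries.
package logwriter

import "os"

type logWriter struct {
	f       *os.File
	pending []byte // entries queued but not yet written/synced
}

// eofTrailer is a placeholder for the record-format EOF trailer bytes.
var eofTrailer = []byte{0, 0, 0, 0, 0, 0, 0}

// Close appends the EOF trailer to the remaining pending entries and issues
// one write+fsync, instead of one fsync for the entries and another for the
// trailer.
func (w *logWriter) Close() error {
	buf := append(w.pending, eofTrailer...)
	w.pending = nil
	if _, err := w.f.Write(buf); err != nil {
		return err
	}
	if err := w.f.Sync(); err != nil { // one L covers both (2) and (3)
		return err
	}
	return w.f.Close()
}
```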

(4) & (5) could happen in parallel, but it would require some additional, delicate synchronization.
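A hedged sketch of what that overlap could look like, with illustrative names rather than Pebble's actual structures; the key constraint is that the commit is not acknowledged until both syncs are durable:

```go
// Sketch of overlapping (4) and (5): the WAL directory sync proceeds in
// parallel with the first batch's write+sync to the new WAL.
package walrotate

import (
	"os"
	"sync"
)

func syncNewWALAndDir(walDir, newWAL *os.File, batch []byte) error {
	var (
		wg     sync.WaitGroup
		dirErr error
	)
	wg.Add(1)
	go func() {
		defer wg.Done()
		// (4) durably link the new WAL into its name.
		dirErr = walDir.Sync()
	}()

	// (5) write and sync the first batch to the new WAL.
	if _, err := newWAL.Write(batch); err != nil {
		wg.Wait()
		return err
	}
	err := newWAL.Sync()

	// The commit must not be acknowledged until both syncs are durable.
	wg.Wait()
	if err != nil {
		return err
	}
	return dirErr
}
```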

Or alternatively we could prepare the next WAL ahead of time. In a steady state, Pebble would have two open WALs with log numbers >= minUnflushedLogNum: current and next. The next LogWriter's flushLoop would synchronize with current's Close, refusing to signal waiting syncQueuers until current's Close has completed. By addressing (2) & (3) as well, this would eliminate any additional worst-case fsync latency from the WAL rotation itself, bringing it in line with ordinary WAL fsyncs.
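A rough sketch of that handoff, with hypothetical names (prevClosed, flushAndSync, etc.) standing in for whatever mechanism is actually used: the next LogWriter performs its writes and syncs normally but holds back acknowledgment of sync waiters until the previous WAL's Close has finished.

```go
// Sketch only: the next LogWriter's flush loop gates acknowledgments on the
// previous WAL's Close completing.
package walpipeline

type syncWaiter chan error

type nextLogWriter struct {
	prevClosed <-chan struct{} // closed when the previous WAL's Close completes
	waiters    chan syncWaiter
}

func (w *nextLogWriter) flushLoop() {
	for waiter := range w.waiters {
		err := w.flushAndSync() // hypothetical: write queued entries, fsync the new WAL
		// Hold back the acknowledgment until the previous WAL is fully
		// closed, preserving the ordering that recovery relies on.
		<-w.prevClosed
		waiter <- err
	}
}

func (w *nextLogWriter) flushAndSync() error {
	// Placeholder for writing queued entries and syncing the new WAL.
	return nil
}
```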

In Open, we would need to relax/rework the strictWALTail option. Currently all replayed WALs besides the most recent one are required to have clean tails indicating that they were deliberately closed; anything else is interpreted as corruption. With this change, it would be possible for the second most recent WAL to have an unclean tail for some time. We could write a marker entry to the next WAL only after the next WAL has observed that current's Close completed; if recovery finds this marker and the previous WAL's tail is unclean, it should treat the unclean tail as corruption.
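A sketch of the recovery-side decision, assuming a hypothetical marker record (the names below are illustrative, not an existing Pebble format):

```go
// Sketch only: deciding whether an unclean tail on the second-most-recent
// WAL is tolerable during replay.
package walreplay

import "errors"

var ErrCorruption = errors.New("pebble: WAL corruption")

type replayedWAL struct {
	cleanTail       bool // true if the WAL ends with a deliberate EOF trailer
	hasClosedMarker bool // true if this WAL contains the "previous WAL closed" marker
}

// checkPrevTail: if next recorded that prev's Close completed, an unclean
// tail on prev can only mean corruption. Otherwise rotation may simply have
// been interrupted before prev was closed, and the unclean tail is expected.
func checkPrevTail(prev, next replayedWAL) error {
	if !prev.cleanTail && next.hasClosedMarker {
		return ErrCorruption
	}
	return nil
}
```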

Jira issue: PEBBLE-192

jbowens commented 1 year ago

In #2762 we've unbounded the amount of data that may be queued for flushing within a single WAL. Today, the 1:1 relationship between WALs and memtables means that the amount of data queued for flushing is bounded by the size of the mutable memtable. If we begin pipelining WALs, allowing more than one WAL to queue writes, this bound will effectively be lifted to opts.MemTableStopWritesThreshold * opts.MemTableSize. If/when we make this change, we should reevaluate what, if any, additional bound we want to impose on blocks queued for flushing.
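As a back-of-the-envelope illustration of the lifted bound, using made-up option values rather than Pebble's defaults:

```go
// Illustrative arithmetic only: the option values are not Pebble defaults.
package main

import "fmt"

func main() {
	const (
		memTableSize                = 64 << 20 // 64 MiB, illustrative
		memTableStopWritesThreshold = 4        // illustrative
	)
	// With a single mutable WAL, queued-but-unflushed data is bounded by
	// roughly one memtable; with pipelined WALs the bound grows to the
	// full memtable budget.
	fmt.Printf("per-WAL bound: %d MiB\n", memTableSize>>20)
	fmt.Printf("pipelined bound: %d MiB\n", (memTableStopWritesThreshold*memTableSize)>>20)
}
```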