tikv / raft-engine

A persistent storage engine for Multi-Raft log
Apache License 2.0

code refactor about purge and rewrite #64

Closed: hicqu closed this 4 years ago

hicqu commented 4 years ago

Signed-off-by: qupeng <qupeng@pingcap.com>

Little-Wallace commented 4 years ago

Could we split the compression out of LogBatch, and encode the data when putting any new item? Like this:

struct LogBatch {
    items: Vec<LogItem>,
    data: Vec<u8>,
}

impl LogBatch {
    fn put(&mut self, entries: Vec<Entry>) {
        let item = LogItem::Entries(entries);
        item.encode_to(&mut self.data);
        self.items.push(item);
    }

    fn data(&self) -> &[u8] {
        &self.data
    }
}

impl PipeLog {
    fn write<E: Message, W: EntryExt<E>>(
        &self,
        data: &[u8],
        mut sync: bool,
    ) -> Result<(usize, usize, Option<CacheTracker>)> {
        if data.len() > self.compression_threshold {
            let compressed = lz4::encode_block(data, HEADER_LEN);
            self.append(LogQueue::Append, &compressed, &mut sync)?;
        } else {
            self.append(LogQueue::Append, data, &mut sync)?;
        }
        // ...
    }
}
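
For illustration, here is a self-contained sketch of the encode-on-put idea; Entry, LogItem, and the byte layout below are stand-ins, not raft-engine's real types or format:

// Stand-in entry type; the real Entry comes from raft-rs.
struct Entry {
    index: u64,
    data: Vec<u8>,
}

enum LogItem {
    Entries(Vec<Entry>),
}

impl LogItem {
    // Placeholder encoding: 8-byte big-endian index followed by the payload.
    fn encode_to(&self, buf: &mut Vec<u8>) {
        let LogItem::Entries(entries) = self;
        for e in entries {
            buf.extend_from_slice(&e.index.to_be_bytes());
            buf.extend_from_slice(&e.data);
        }
    }
}

#[derive(Default)]
struct LogBatch {
    items: Vec<LogItem>,
    data: Vec<u8>,
}

impl LogBatch {
    // Entries are serialized eagerly, at insertion time, so the batch
    // always holds a ready-to-write byte buffer.
    fn put(&mut self, entries: Vec<Entry>) {
        let item = LogItem::Entries(entries);
        item.encode_to(&mut self.data);
        self.items.push(item);
    }

    fn data(&self) -> &[u8] {
        &self.data
    }
}

fn main() {
    let mut batch = LogBatch::default();
    batch.put(vec![Entry { index: 1, data: b"hello".to_vec() }]);
    // The serialized bytes exist before any write call; compression
    // (if any) then becomes purely the write path's concern.
    assert_eq!(batch.data().len(), 8 + 5);
}
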
Little-Wallace commented 4 years ago

And we do not need to store the compression type in EntryIndex. Because we always read the data from the file first, we can tell from the header whether the data has been compressed.
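
For example, the read path could branch on a header flag along these lines; the 8-byte header layout below (low byte = compression type) is a hypothetical stand-in for whatever the actual batch header encodes:

use std::convert::TryInto;

const HEADER_LEN: usize = 8;

// Hypothetical compression markers, for illustration only.
const COMPRESSION_NONE: u8 = 0;
const COMPRESSION_LZ4: u8 = 1;

// Inspect the batch header read back from the file and decide whether
// the payload needs decompression before decoding.
fn is_compressed(buf: &[u8]) -> bool {
    assert!(buf.len() >= HEADER_LEN);
    let header = u64::from_be_bytes(buf[..HEADER_LEN].try_into().unwrap());
    (header & 0xff) as u8 == COMPRESSION_LZ4
}

fn main() {
    let mut block = vec![0u8; HEADER_LEN];
    block[HEADER_LEN - 1] = COMPRESSION_LZ4; // low byte of a big-endian u64
    assert!(is_compressed(&block));
    block[HEADER_LEN - 1] = COMPRESSION_NONE;
    assert!(!is_compressed(&block));
}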