Closed · hicqu · closed 4 years ago
Could we split compression out of LogBatch, and encode the data whenever a new item is put? Something like this:
struct LogBatch {
    items: Vec<LogItem>,
    data: Vec<u8>,
}

impl LogBatch {
    fn put(&mut self, entries: Vec<Entry>) {
        let item = LogItem::Entries(entries);
        item.encode_to(&mut self.data);
        self.items.push(item);
    }

    fn data(&self) -> &[u8] {
        &self.data
    }
}
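To make the idea above concrete, here is a minimal, self-contained sketch of the incremental encoding: every `put` serializes the item into the shared buffer immediately, so the batch never needs a separate encode pass before being handed to `PipeLog::write`. The `Entry` struct and the length-prefixed wire format here are hypothetical stand-ins, not the real raft-engine types.

```rust
// Stand-in for the real protobuf entry type (illustrative only).
#[derive(Clone)]
struct Entry {
    index: u64,
    data: Vec<u8>,
}

struct LogItem {
    entries: Vec<Entry>,
}

impl LogItem {
    // Hypothetical length-prefixed encoding:
    // [entry count: u32][index: u64][payload len: u32][payload bytes]...
    fn encode_to(&self, buf: &mut Vec<u8>) {
        buf.extend_from_slice(&(self.entries.len() as u32).to_le_bytes());
        for e in &self.entries {
            buf.extend_from_slice(&e.index.to_le_bytes());
            buf.extend_from_slice(&(e.data.len() as u32).to_le_bytes());
            buf.extend_from_slice(&e.data);
        }
    }
}

struct LogBatch {
    items: Vec<LogItem>,
    data: Vec<u8>,
}

impl LogBatch {
    fn new() -> Self {
        LogBatch { items: Vec::new(), data: Vec::new() }
    }

    // Encode eagerly on put; no second pass is needed later.
    fn put(&mut self, entries: Vec<Entry>) {
        let item = LogItem { entries };
        item.encode_to(&mut self.data);
        self.items.push(item);
    }

    // The accumulated bytes can be passed to PipeLog::write as-is.
    fn data(&self) -> &[u8] {
        &self.data
    }
}
```

The point of the design is that `data()` is always current, so the caller pays the encoding cost incrementally instead of in one burst at write time.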
impl PipeLog {
    fn write<E: Message, W: EntryExt<E>>(
        &self,
        data: &[u8],
        mut sync: bool,
    ) -> Result<(usize, usize, Option<CacheTracker>)> {
        if data.len() > self.compression_threshold {
            let compressed = lz4::encode_block(data, HEADER_LEN);
            self.append(LogQueue::Append, &compressed, &mut sync)?;
        } else {
            self.append(LogQueue::Append, data, &mut sync)?;
        }
        // ...
    }
}
Then we would not need to store the compression type in EntryIndex: since the data must be read back from the file first anyway, we can tell from the HEADER whether it has been compressed.
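A minimal sketch of that header-based detection, assuming a hypothetical block header with a magic prefix followed by a one-byte compression flag (the real raft-engine header layout may differ):

```rust
// Hypothetical compression flags stored in the block header.
const COMPRESSION_NONE: u8 = 0;
const COMPRESSION_LZ4: u8 = 1;

// Hypothetical header layout: [magic: 4 bytes][compression: 1 byte].
const HEADER_LEN: usize = 5;
const MAGIC: [u8; 4] = [0x52, 0x41, 0x46, 0x54]; // illustrative only

// Written by the append path: the header records how the payload is encoded.
fn write_header(buf: &mut Vec<u8>, compression: u8) {
    buf.extend_from_slice(&MAGIC);
    buf.push(compression);
}

// Used by the read path: the compression type is recovered from the header
// alone, so EntryIndex never has to carry it.
fn compression_type(block: &[u8]) -> Option<u8> {
    if block.len() < HEADER_LEN || block[..4] != MAGIC {
        return None;
    }
    Some(block[4])
}
```

With this, the reader first pulls the block from disk, inspects the header byte, and only then decides whether to run lz4 decompression on the payload.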
Signed-off-by: qupeng qupeng@pingcap.com