bincode-org / bincode

A binary encoder / decoder implementation in Rust.

Decode memory usage #605

Closed mrsanor closed 1 year ago

mrsanor commented 1 year ago

I am trying to decode a file that was written using bincode. The file is 22.8 MB, but while decoding it the memory usage jumps to 7.5 GB and it takes about 15 seconds. Once it's loaded the memory usage doesn't drop, and the struct I'm loading shouldn't take anywhere near that much space.

I also set a decode limit of 54 MB (53 MB would throw a LimitExceeded error), which I guess is the amount of memory the value takes once loaded, although it came out to around 76 MB when I calculated it programmatically. I am using the same config for encoding and decoding, and I have tried generating the file multiple times with different data (but the same struct). Once loaded, the data all seems to be correct.

The struct I am encoding/decoding looks like this

use serde::{Deserialize, Serialize};
use dashmap::DashMap;

// This is the struct that I am trying to encode/decode into a file
#[derive(Serialize, Deserialize)]
struct MyStruct {
    map1: DashMap<MyKey, DashMap<MyEnum, i32>>,
    map2: DashMap<MyKey, DashMap<MyEnum, i32>>,
    count: usize,
}

// This struct is only used as a key for the DashMaps in `MyStruct`
#[derive(Serialize, Deserialize, Hash, Eq, PartialEq)]
struct MyKey {
    key1: u8,
    key2: u16,
    key3: Vec<MyEnum>,
}

#[derive(Serialize, Deserialize, Hash, Eq, PartialEq)]
enum MyEnum { Val1, Val2, Val3(i32) }

And this is how I encode/decode (I have tried both v1.3.3 and v2.0.0-rc.1, and different writers and readers):

use std::fs::{File, OpenOptions};
use bincode::config::*;

impl MyStruct {
  fn encode(&self) {
    let mut file = OpenOptions::new()
          .create(true)
          .write(true)
          .truncate(true)
          .open("file_name.bin")
          .unwrap();

    bincode::serde::encode_into_std_write(self, &mut file, bincode::config::standard()).unwrap();
  }

  fn decode() -> Self {
    let mut file = File::open("file_name.bin").unwrap();
    // I added the limit here to test when it throws error (53_000_000 would throw error)
    let cfg: Configuration<LittleEndian, Varint, WriteFixedArrayLength, Limit<54_000_000>> = bincode::config::standard().with_limit();
    bincode::serde::decode_from_std_read(&mut file, cfg).unwrap()
  }
}

So am I doing something wrong or what is going on here?

VictorKoenders commented 1 year ago

Two things come to mind:

- does the decoded map's .capacity() roughly match its .len(), or has it over-allocated?
- does the same thing happen if you use plain HashMaps instead of DashMaps?

mrsanor commented 1 year ago

The .capacity() and .len() didn't quite match (capacity 1835008 vs len 928958, and capacity 229376 vs len 216774), but that shouldn't cause that big of a problem. I also got similar results with HashMap.

I replaced DashMap with HashMap and now it takes barely any memory, so things work as they should. Under the hood I believe DashMap uses HashMaps that are sharded to provide better concurrency support.
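
Conceptually (this is just my mental model, not dashmap's actual definition) each DashMap is something like a fixed set of independently locked HashMaps, so its total capacity is the sum of the per-shard capacities:

use std::collections::HashMap;
use std::sync::RwLock;

// Mental model only, not dashmap's real layout: a fixed number of shards,
// each an independently locked HashMap that reserves its own capacity.
struct ShardedMap<K, V> {
    shards: Vec<RwLock<HashMap<K, V>>>,
}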

I have tried to replicate the bug with much simpler setups, but haven't been able to do it yet.

mrsanor commented 1 year ago

Okay, here is the simplest test I was able to come up with. The problem seems to be with nested DashMaps. As you can see, the capacity of the nested maps is much larger after decoding, and by my estimate the size is 2.3 GB for DashMap vs 43 MB for HashMap, though my calculation is probably an underestimate since the program actually uses more memory than that.

use std::collections::HashMap;
use std::mem::size_of_val;
use std::thread;
use std::time::Duration;
use dashmap::DashMap;

fn main() {
    // This will use about 3.2 GiB memory
    println!("=== DashMap ===");
    test_dash();

    // This will use 108 MiB memory
    println!("\n=== HashMap ===");
    test_hash();
}

fn test_dash() {
    let map: DashMap<i32, DashMap<i32, i32>> = DashMap::new();
    for i in 0..300_000 {
        let map2 = DashMap::new();
        map2.insert(5, 1000);
        map.insert(i, map2);
    }

    println!("Original map len: {}", map.len());
    println!("Original map capacity: {}", map.capacity());
    println!("Original map nested capacity: {}", map.get(&0).unwrap().capacity());
    println!("Original map entry bytes: {}", size_of_val(&map.entry(0)));
    println!("Original map nested entry bytes: {}", size_of_val(&map.get(&0).unwrap().entry(0)));
    println!("Original map bytes: {}, megabytes: {}", dash_size(&map), dash_size(&map) / 1_000_000);

    let cfg = bincode::config::standard();
    let encoded_map = bincode::serde::encode_to_vec(&map, cfg).unwrap();
    println!("Encoded map len: {}", encoded_map.len());

    let tuple: (DashMap<i32, DashMap<i32, i32>>, usize) = bincode::serde::decode_from_slice(&encoded_map, cfg).unwrap();
    let decoded_map = tuple.0;
    println!("Decoded map bytes read: {}", tuple.1);
    println!("Decoded map len: {}", decoded_map.len());
    println!("Decoded nested map len: {}", decoded_map.get(&0).unwrap().len());
    println!("Decoded map capacity: {}", decoded_map.capacity());
    println!("Decoded nested map capacity: {}", decoded_map.get(&0).unwrap().capacity());
    println!("Decoded map entry bytes: {}", size_of_val(&map.entry(0)));
    println!("Decoded map nested entry bytes: {}", size_of_val(&map.get(&0).unwrap().entry(0)));
    println!("Decoded map bytes: {}, megabytes: {}", dash_size(&decoded_map), dash_size(&decoded_map) / 1_000_000);

    thread::sleep(Duration::new(4, 0));
}

fn test_hash() {
    let mut map: HashMap<i32, HashMap<i32, i32>> = HashMap::new();
    for i in 0..300_000 {
        let mut map2 = HashMap::new();
        map2.insert(5, 1000);
        map.insert(i, map2);
    }

    println!("Original map len: {}", map.len());
    println!("Original map capacity: {}", map.capacity());
    println!("Original map nested capacity: {}", map.get(&0).unwrap().capacity());
    println!("Original map entry bytes: {}", size_of_val(&map.entry(0)));
    println!("Original map nested entry bytes: {}", size_of_val(&map.get_mut(&0).unwrap().entry(0)));
    println!("Original map bytes: {}, megabytes: {}", hash_size(&mut map), hash_size(&mut map) / 1_000_000);

    let cfg = bincode::config::standard();
    let encoded_map = bincode::serde::encode_to_vec(&map, cfg).unwrap();
    println!("Encoded map len: {}", encoded_map.len());

    let tuple: (HashMap<i32, HashMap<i32, i32>>, usize) = bincode::serde::decode_from_slice(&encoded_map, cfg).unwrap();
    let mut decoded_map = tuple.0;
    println!("Decoded map bytes read: {}", tuple.1);
    println!("Decoded map len: {}", decoded_map.len());
    println!("Decoded nested map len: {}", decoded_map.get(&0).unwrap().len());
    println!("Decoded map capacity: {}", decoded_map.capacity());
    println!("Decoded nested map capacity: {}", decoded_map.get(&0).unwrap().capacity());
    println!("Decoded map entry bytes: {}", size_of_val(&decoded_map.entry(0)));
    println!("Decoded map nested entry bytes: {}", size_of_val(&decoded_map.get_mut(&0).unwrap().entry(0)));
    println!("Decoded map bytes: {}, megabytes: {}", hash_size(&mut decoded_map), hash_size(&mut decoded_map) / 1_000_000);

    thread::sleep(Duration::new(4, 0));
}

fn dash_size(map: &DashMap<i32, DashMap<i32, i32>>) -> usize {
    let mut total_size_bytes = map.capacity() * size_of_val(&map.entry(0));
    for entry in map {
        total_size_bytes += entry.value().capacity() * size_of_val(&entry.value().entry(0));
    }
    total_size_bytes
}

fn hash_size(map: &mut HashMap<i32, HashMap<i32, i32>>) -> usize {
    let mut total_size_bytes = map.capacity() * size_of_val(&map.entry(0));
    for (_, val) in map {
        total_size_bytes += val.capacity() * size_of_val(&val.entry(0));
    }
    total_size_bytes
}

The output is

=== DashMap ===
Original map len: 300000
Original map capacity: 458752
Original map nested capacity: 3
Original map entry bytes: 40
Original map nested entry bytes: 40
Original map bytes: 54350080, megabytes: 54
Encoded map len: 2934217
Decoded map bytes read: 2934217
Decoded map len: 300000
Decoded nested map len: 1
Decoded map capacity: 458752
Decoded nested map capacity: 192
Decoded map entry bytes: 40
Decoded map nested entry bytes: 40
Decoded map bytes: 2322350080, megabytes: 2322

=== HashMap ===
Original map len: 300000
Original map capacity: 458752
Original map nested capacity: 3
Original map entry bytes: 32
Original map nested entry bytes: 32
Original map bytes: 43480064, megabytes: 43
Encoded map len: 2934217
Decoded map bytes read: 2934217
Decoded map len: 300000
Decoded nested map len: 1
Decoded map capacity: 458752
Decoded nested map capacity: 3
Decoded map entry bytes: 32
Decoded map nested entry bytes: 32
Decoded map bytes: 43480064, megabytes: 43
VictorKoenders commented 1 year ago

I haven't dug too much into this, but I suspect this is an issue in serde. What I'd expect to happen is that serde notifies dashmap of how many items to pre-allocate, and that dashmap respects this. This is what we're doing with HashMap here.

If this is not happening correctly, then dashmap falls back to its internal re-allocation strategy as items are added to a shard beyond its capacity. That means allocating new memory, copying the existing items over, and freeing the previous allocation.
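
For reference, the usual pattern on the deserialize side looks roughly like the following. This is a simplified sketch (the MapVisitor type is made up for illustration, and the real impls also cap the hint so a bogus length prefix can't force a huge allocation), but it shows where the pre-allocation is supposed to happen:

use std::collections::HashMap;
use std::fmt;
use std::hash::Hash;
use std::marker::PhantomData;

use serde::de::{Deserialize, MapAccess, Visitor};

struct MapVisitor<K, V> {
    marker: PhantomData<fn() -> HashMap<K, V>>,
}

impl<'de, K, V> Visitor<'de> for MapVisitor<K, V>
where
    K: Deserialize<'de> + Eq + Hash,
    V: Deserialize<'de>,
{
    type Value = HashMap<K, V>;

    fn expecting(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("a map")
    }

    fn visit_map<A: MapAccess<'de>>(self, mut access: A) -> Result<Self::Value, A::Error> {
        // The format (e.g. bincode) knows how many entries follow and passes
        // that down as the size hint, so the map is allocated once, up front.
        let mut map = HashMap::with_capacity(access.size_hint().unwrap_or(0));
        while let Some((key, value)) = access.next_entry()? {
            map.insert(key, value);
        }
        Ok(map)
    }
}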

I'm not sure if we can fix this in bincode unfortunately (but feel free to try to prove my assumption wrong!)

What we can do is implement Encode/Decode in dashmap and implement the desired behavior there. More information here.

Edit: Does this issue also happen with bincode 1? I wonder if it's a regression.

mrsanor commented 1 year ago

Yeah, it happens with bincode 1.3.3 as well, so it's not a regression.

mrsanor commented 1 year ago

What we can do is implement Encode/Decode in dashmap, and implement the desired behavior there. More information here

Did you mean I should make a pull request to dashmap with the correct implementation of Encode and Decode? I'm not really sure what that implementation should look like, though.
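
My rough guess, purely for my own understanding, is something along these lines. This is completely untested and I may have the 2.0 RC API wrong; the point is just to read the length prefix first and pre-allocate once, instead of letting every shard grow on its own while inserting. It would also have to live inside dashmap itself (behind a feature flag), because of the orphan rules:

use std::hash::Hash;

use bincode::{de::Decoder, error::DecodeError, Decode};
use dashmap::DashMap;

// Untested sketch of a manual Decode impl for DashMap against the bincode 2.0 RC
// API as I understand it: read the element count, reserve once, then fill.
impl<K, V> Decode for DashMap<K, V>
where
    K: Decode + Eq + Hash,
    V: Decode,
{
    fn decode<D: Decoder>(decoder: &mut D) -> Result<Self, DecodeError> {
        // Collections are prefixed with their element count.
        let len = usize::decode(decoder)?;
        // Reserve room for `len` entries up front instead of growing shard by shard.
        let map = DashMap::with_capacity(len);
        for _ in 0..len {
            let key = K::decode(decoder)?;
            let value = V::decode(decoder)?;
            map.insert(key, value);
        }
        Ok(map)
    }
}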

mrsanor commented 1 year ago

Okay, so the problem is that DashMap initializes each decoded nested map with 64 shards and a capacity of 3 per shard, for a total capacity of 192. I need to think of a way to initialize them that is better suited to this use case, and I'll perhaps make a pull request to DashMap later. Thank you @VictorKoenders for the help and for the nice library you have here!
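
For reference, that per-shard minimum explains the test output exactly: the outer map is 458752 slots * 40 bytes = 18350080 bytes, and each of the 300000 nested maps decodes to 64 shards * 3 slots = 192 slots * 40 bytes = 7680 bytes, so 300000 * 7680 = 2304000000 bytes, which together give the 2322350080 bytes printed above. One idea I want to try is building the tiny nested maps with an explicit, smaller shard amount, something like this (untested; if I read the dashmap docs right, the shard amount has to be a power of two greater than one):

use dashmap::DashMap;

fn main() {
    // Untested idea: give each tiny nested map a small, explicit shard amount so it
    // doesn't end up reserving ~192 slots for a single entry.
    let inner: DashMap<i32, i32> = DashMap::with_capacity_and_shard_amount(1, 2);
    inner.insert(5, 1000);
    println!("nested capacity: {}", inner.capacity());
}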