nlohmann / json

JSON for Modern C++
https://json.nlohmann.me
MIT License

Multithreaded use of from_msgpack is very slow #4016

Closed jiapengwen closed 1 year ago

jiapengwen commented 1 year ago

Description

  1. I have many binary files produced by to_msgpack, e.g. named 1-1000.

  2. I read these binary files from multiple threads through from_msgpack:

    
``` cpp
#include <algorithm>
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <fstream>
#include <iterator>
#include <string>
#include <thread>
#include <vector>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

void ff(int i) {
    // read the whole binary file into memory
    std::ifstream infile(std::to_string(i), std::ios_base::in | std::ios_base::binary);
    std::istreambuf_iterator<char> iter(infile);
    std::vector<uint8_t> v_msgpack{};
    std::copy(iter, std::istreambuf_iterator<char>(), std::back_inserter(v_msgpack));

    // time only the parsing step
    auto start = std::chrono::system_clock::now();
    json j = json::from_msgpack(v_msgpack);
    auto end = std::chrono::system_clock::now();
    std::chrono::duration<double> elapsed = end - start;
    printf("%d load cost = %f\n", i, elapsed.count());
}

int main() {
    // change thread_num to get different load cost times
    int thread_num = 20;
    std::vector<std::thread> th_pool;
    for (int i = 0; i < thread_num; ++i) {
        th_pool.emplace_back(ff, i);
    }
    for (int i = 0; i < thread_num; ++i) {
        th_pool[i].join();
    }
    return 0;
}
```

  3. When thread_num = 2:

``` text
0 load cost = 7.471617
1 load cost = 7.226486
```

  4. When thread_num = 20:

``` text
0 load cost = 40.xx
1 load cost = 40.xxx
...
x load cost = 53.xx
```

Reproduction steps

See the description above.
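Since the original input files are not attached (see the last comment), a generator along these lines can produce stand-in inputs. This is only a sketch: the file count, the item count, and the integer payload are assumptions chosen to approximate the ~300 MB files described below.

``` cpp
#include <cstddef>
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>
#include <nlohmann/json.hpp>

using json = nlohmann::json;

int main() {
    // assumptions: 20 files named "0".."19", each a msgpack-encoded array
    // of integers; raise items_per_file toward the 100M mentioned in the
    // thread if you have the RAM to build the DOM first
    const int file_count = 20;
    const std::size_t items_per_file = 10000000;

    json arr = json::array();
    for (std::size_t i = 0; i < items_per_file; ++i) {
        arr.push_back(static_cast<std::int64_t>(i));
    }
    const std::vector<std::uint8_t> bytes = json::to_msgpack(arr);

    for (int i = 0; i < file_count; ++i) {
        std::ofstream out(std::to_string(i), std::ios_base::binary);
        out.write(reinterpret_cast<const char*>(bytes.data()),
                  static_cast<std::streamsize>(bytes.size()));
    }
    return 0;
}
```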

Expected vs. actual results

I would expect the per-file load cost to stay roughly the same when using multiple threads. Instead, the times grow with the thread count, which looks like contention between the threads.
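One way to probe the contention hypothesis without the library at all is a micro-benchmark in which every thread makes many small heap allocations, roughly what a DOM parser does per node. This is a sketch with an arbitrary block size and iteration count; if the per-thread time grows with the thread count here too, the system allocator rather than from_msgpack itself is the likely bottleneck.

``` cpp
#include <chrono>
#include <cstdio>
#include <cstdlib>
#include <thread>
#include <vector>

// each thread allocates small blocks in batches and frees them, so memory
// stays bounded while the allocator is hit constantly; each thread reports
// its own elapsed time
void alloc_loop(int id, long iters) {
    auto start = std::chrono::steady_clock::now();
    std::vector<void*> batch(1000);
    for (long i = 0; i < iters; ++i) {
        for (void*& p : batch) p = std::malloc(32);  // small block, like a JSON node
        for (void* p : batch) std::free(p);
    }
    std::chrono::duration<double> elapsed =
        std::chrono::steady_clock::now() - start;
    std::printf("thread %d: %f s\n", id, elapsed.count());
}

int main(int argc, char** argv) {
    const int thread_num = argc > 1 ? std::atoi(argv[1]) : 2;
    std::vector<std::thread> th_pool;
    for (int i = 0; i < thread_num; ++i) {
        th_pool.emplace_back(alloc_loop, i, 10000L);  // ~10M allocations per thread
    }
    for (auto& t : th_pool) {
        t.join();
    }
    return 0;
}
```

Note that the outcome depends on the system allocator: glibc malloc uses per-thread arenas, so the amount of contention it shows can vary between machines.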

Minimal code example

No response

Error messages

No response

Compiler and operating system

Linux DESKTOP-E1TD784 5.10.16.3-microsoft-standard-WSL2 #1 SMP Fri Apr 2 22:23:49 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

Library version

3.9.1


nlohmann commented 1 year ago

The library does not use any static memory that would hinder multithreading - maybe the operating system has limits on opening files? Can you share the files to have a reproducible setup?
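As an aside, the per-process open-file limit mentioned above can be checked with POSIX getrlimit, as in the sketch below; with 20 threads opening one file each, the typical default soft limit of 1024 is unlikely to be the culprit, but this rules the theory out quickly.

``` cpp
#include <cstdio>
#include <sys/resource.h>

// print the soft/hard limits on open file descriptors;
// `ulimit -n` in a shell reports the same soft limit
int main() {
    rlimit rl{};
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0) {
        std::printf("open files: soft=%llu hard=%llu\n",
                    static_cast<unsigned long long>(rl.rlim_cur),
                    static_cast<unsigned long long>(rl.rlim_max));
    }
    return 0;
}
```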

jiapengwen commented 1 year ago

My test files are large (the average size is 300 MB), so I cannot share them, but you can construct an array with 100M items. I think the cause is that memory is allocated too many times.
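The allocator is a template parameter of basic_json, so a counting allocator is one way to quantify the "too many allocations" hypothesis. A sketch, assuming an array of small objects as the payload; not every internal allocation (e.g. inside std::string) goes through this parameter, so the printed number is a lower bound.

``` cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <cstdio>
#include <map>
#include <memory>
#include <string>
#include <utility>
#include <vector>
#include <nlohmann/json.hpp>

std::atomic<std::size_t> g_allocs{0};

// minimal counting allocator: forwards to std::allocator and counts calls
template <typename T>
struct CountingAllocator {
    using value_type = T;
    CountingAllocator() = default;
    template <typename U>
    CountingAllocator(const CountingAllocator<U>&) noexcept {}

    T* allocate(std::size_t n) {
        g_allocs.fetch_add(1, std::memory_order_relaxed);
        return std::allocator<T>{}.allocate(n);
    }
    void deallocate(T* p, std::size_t n) noexcept {
        std::allocator<T>{}.deallocate(p, n);
    }
};
template <typename T, typename U>
bool operator==(const CountingAllocator<T>&, const CountingAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const CountingAllocator<T>&, const CountingAllocator<U>&) { return false; }

// basic_json with the default template arguments except the allocator
using counting_json = nlohmann::basic_json<std::map, std::vector, std::string,
                                           bool, std::int64_t, std::uint64_t,
                                           double, CountingAllocator>;

int main() {
    // build an array of small objects so parsing allocates per element
    counting_json arr = counting_json::array();
    for (int i = 0; i < 100000; ++i) {
        counting_json obj = {{"id", i}, {"name", "item"}};
        arr.push_back(std::move(obj));
    }
    const std::vector<std::uint8_t> bytes = counting_json::to_msgpack(arr);

    g_allocs = 0;
    counting_json parsed = counting_json::from_msgpack(bytes);
    std::printf("allocations during from_msgpack: %zu\n", g_allocs.load());
    return 0;
}
```

If the count grows linearly with the element count, preloading a malloc built for multi-threaded workloads (e.g. jemalloc or tcmalloc) is a common way to confirm that allocator contention, not the parser itself, is what scales badly with the thread count.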