Closed yszhou2019 closed 1 year ago
@ilyachur A TBB fix was pushed to https://github.com/oneapi-src/oneTBB/commits/tbb_2020 @yszhou2019 You can try building the TBB binaries from that branch and replacing the ones shipped inside the OpenVINO folder.
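For reference, a rough sketch of that rebuild-and-replace step, assuming a classic Makefile-based TBB 2020 build; the exact output and install paths are assumptions and vary by platform and OpenVINO version:

```shell
# Sketch only: clone the patched branch linked above and build it.
git clone --branch tbb_2020 https://github.com/oneapi-src/oneTBB.git
cd oneTBB
make -j"$(nproc)"   # TBB 2020-era branches use the classic Makefile build

# The built libtbb shared libraries land under build/<platform>_release/.
# Copy them over the TBB libraries bundled with your OpenVINO install
# (the bundled location differs between OpenVINO releases), then re-run
# your test to check memory behavior.
```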
Thank you! @peterchen-intel I will try it later.
@peterchen-intel I just compiled the tbb_2020 branch you mentioned, and I believe there is still a memory leak.
I wonder how this memory problem can be fixed in a later version.
@yszhou2019 We did find a memory leak in TBB, but with a different call trace from the one you provided earlier.
In my understanding, the call trace above will not be hit multiple times when inference() is called in a loop, so it should not cause much memory growth. In any case, we will continue debugging this TBB memory leak and try to find a solution.
To work around the TBB memory leak, you can rebuild OpenVINO with TBB disabled (-DTHREADING=SEQ) and run the same test sample to check whether memory consumption still increases continuously.
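A minimal sketch of that rebuild, assuming an OpenVINO source checkout with submodules; `-DTHREADING=SEQ` selects a sequential executor instead of TBB, which takes TBB out of the picture for the leak test:

```shell
# Sketch only: configure and build OpenVINO without TBB.
cd openvino
mkdir -p build && cd build
cmake -DTHREADING=SEQ -DCMAKE_BUILD_TYPE=Release ..
cmake --build . --parallel
```

Then run the same looped-inference sample against this build and watch the process RSS (e.g. with `top` or `ps`) to see whether memory still grows without TBB.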
@riverlijunjie Thank you for your reply! I will try it and reply later.
Closing this, I hope previous responses were sufficient to help you proceed. Feel free to reopen and ask any questions related to this topic.
System information
Detailed description
I often follow the procedure below to load an ONNX model and run inference with a fixed-size tensor. But today, when I ran a benchmark (Google Benchmark) to test the performance of our model, I found that model compilation with shape {1, 500, 80} (
compiledModel_ = core_.compile_model(model);
) takes too much memory, leading to OOM, and the whole program was killed. Actually, I don't know how to deal with this problem. Maybe I shouldn't load the ONNX model and compile it to IR in C++?
The mel length often varies from 100 to 1300. OOM happens with mel length >= 500.
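For context, a minimal sketch of the load-and-compile flow described above, using the OpenVINO 2.0 C++ API; the model path, device name, and input shape are placeholders standing in for the reporter's actual setup:

```cpp
#include <openvino/openvino.hpp>

int main() {
    ov::Core core;

    // Reading the ONNX model and compiling it both happen in-process here;
    // compilation with a large fixed shape (e.g. {1, 500, 80}) is the step
    // where the OOM was observed.
    auto model = core.read_model("model.onnx");            // placeholder path
    auto compiledModel = core.compile_model(model, "CPU"); // placeholder device

    auto request = compiledModel.create_infer_request();
    ov::Tensor input(ov::element::f32, ov::Shape{1, 500, 80});
    request.set_input_tensor(input);
    request.infer();
    return 0;
}
```

One alternative worth considering is converting the ONNX model to OpenVINO IR offline (with OpenVINO's model conversion tooling) so the C++ program only loads a pre-converted IR instead of compiling from ONNX at runtime.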