opencv / opencv_contrib

Repository for OpenCV's extra modules
Apache License 2.0

Why can't there be parallel inference when the batch size is greater than 1 in C++ OpenCV CUDA DNN? #3799

Open kj2314 opened 1 week ago

kj2314 commented 1 week ago

Detailed description: I am using the C++ version of OpenCV to run GPU inference on a simple convolutional network. In release mode, the inference time is 40 ms when the batch size is 1, but approximately 160 ms when the batch size is 4. My expectation was that the inference time would stay at about 40 ms whether the batch size is 1 or 4. Why is the batch not processed in parallel? In debug mode, the following output is printed:

```
[ INFO:0@0.535] global registry_parallel.impl.hpp:96 cv::parallel::ParallelBackendRegistry::ParallelBackendRegistry core(parallel): Enabled backends(3, sorted by priority): ONETBB(1000); TBB(990); OPENMP(980)
[ INFO:0@0.535] global plugin_loader.impl.hpp:67 cv::plugin::impl::DynamicLib::libraryLoad load D:\code\ISImgDetect\demo\opencv_core_parallel_onetbb490_64d.dll => FAILED
[ INFO:0@0.536] global plugin_loader.impl.hpp:67 cv::plugin::impl::DynamicLib::libraryLoad load opencv_core_parallel_onetbb490_64d.dll => FAILED
[ INFO:0@0.536] global plugin_loader.impl.hpp:67 cv::plugin::impl::DynamicLib::libraryLoad load D:\code\ISImgDetect\demo\opencv_core_parallel_tbb490_64d.dll => FAILED
[ INFO:0@0.537] global plugin_loader.impl.hpp:67 cv::plugin::impl::DynamicLib::libraryLoad load opencv_core_parallel_tbb490_64d.dll => FAILED
[ INFO:0@0.537] global plugin_loader.impl.hpp:67 cv::plugin::impl::DynamicLib::libraryLoad load D:\code\ISImgDetect\demo\opencv_core_parallel_openmp490_64d.dll => FAILED
[ INFO:0@0.538] global plugin_loader.impl.hpp:67 cv::plugin::impl::DynamicLib::libraryLoad load opencv_core_parallel_openmp490_64d.dll => FAILED
[ INFO:0@2.086] global op_cuda.cpp:80 cv::dnn::dnn4_v20231225::Net::Impl::initCUDABackend CUDA backend will fallback to the CPU implementation for the layer "_input" of type NetInputLayer
```

According to this, the layer "_input" of type NetInputLayer is not accelerated on the GPU and instead runs on the CPU. Why can't the model perform parallel inference, and how can I solve this problem?
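For context, here is a minimal sketch of the kind of batched CUDA inference setup being described; the model path, input size, and scale factor are placeholders, not taken from the original report:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/dnn.hpp>
#include <vector>

int main() {
    // Placeholder model path; the original network is not shown in the report.
    cv::dnn::Net net = cv::dnn::readNet("model.onnx");

    // Route inference through the CUDA backend/target.
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);

    // Pack a batch of 4 images into a single NCHW blob so the whole
    // batch goes through one forward() call.
    std::vector<cv::Mat> images(4, cv::Mat(224, 224, CV_8UC3, cv::Scalar(0)));
    cv::Mat blob = cv::dnn::blobFromImages(images, 1.0 / 255.0, cv::Size(224, 224));

    net.setInput(blob);
    cv::Mat out = net.forward(); // output has N = 4 along the first axis
    return 0;
}
```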

cudawarped commented 6 days ago

The expectation is that the inference time for the model is 40 ms, whether the batch size is 1 or 4

Would you expect the inference time to be 40ms if the batch size was 1,000,000?

I would guess your GPU is already saturated with a batch size of 1.
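One way to check this is to time forward() across several batch sizes after a warm-up pass, since the first call includes CUDA initialization and would distort the measurement. A rough sketch, under the same placeholder assumptions as the snippet above:

```cpp
#include <opencv2/core.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>
#include <vector>

// Time one forward pass for a given batch size, excluding the first
// (warm-up) call, which includes CUDA backend initialization.
static double timeForward(cv::dnn::Net& net, int batch) {
    std::vector<cv::Mat> images(batch, cv::Mat(224, 224, CV_8UC3, cv::Scalar(0)));
    cv::Mat blob = cv::dnn::blobFromImages(images, 1.0 / 255.0, cv::Size(224, 224));
    net.setInput(blob);
    net.forward(); // warm-up, not timed

    net.setInput(blob);
    cv::TickMeter tm;
    tm.start();
    net.forward();
    tm.stop();
    return tm.getTimeMilli();
}

int main() {
    cv::dnn::Net net = cv::dnn::readNet("model.onnx"); // placeholder path
    net.setPreferableBackend(cv::dnn::DNN_BACKEND_CUDA);
    net.setPreferableTarget(cv::dnn::DNN_TARGET_CUDA);

    for (int batch : {1, 2, 4, 8})
        std::cout << "batch " << batch << ": " << timeForward(net, batch) << " ms\n";
    return 0;
}
```

If the per-image time drops as the batch grows, the GPU still has headroom; if the total time scales linearly with batch size, the device (or a CPU-side stage such as the input layer fallback noted in the log) is already the bottleneck.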

kj2314 commented 6 days ago

@cudawarped

When the batch size is 1, the GPU usage is 24%, and when the batch size is 4, the GPU usage is 28%.