JDAI-CV / fast-reid

SOTA Re-identification Methods and Toolbox
Apache License 2.0

CPUExecutionProvider faster than CUDAExecutionProvider when running inference on test images #679

Closed mnurilmi closed 2 years ago

mnurilmi commented 2 years ago

Hi, I want to ask about running the ONNX model in onnxruntime. I exported the MGN model to ONNX, and the results are surprising: the CPUExecutionProvider is faster than the CUDAExecutionProvider, when in theory it should be the other way around. Can you explain this? Maybe there are some steps I'm missing. Thanks for your attention; I hope this issue gets a response.
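
For reference, this is roughly how I compared the two providers. A minimal sketch: the `mgn.onnx` path and the `(1, 3, 256, 128)` input shape are placeholders based on the default MGN config, so adjust them to your export.

```python
import time

import numpy as np
import onnxruntime as ort

# Placeholder path and input shape; check your own export config.
MODEL_PATH = "mgn.onnx"
batch = np.random.rand(1, 3, 256, 128).astype(np.float32)

for providers in (["CPUExecutionProvider"],
                  ["CUDAExecutionProvider", "CPUExecutionProvider"]):
    sess = ort.InferenceSession(MODEL_PATH, providers=providers)
    name = sess.get_inputs()[0].name
    # Time a single run, including any first-run setup cost.
    start = time.perf_counter()
    sess.run(None, {name: batch})
    print(f"{providers[0]}: {(time.perf_counter() - start) * 1000:.1f} ms")
```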

mnurilmi commented 2 years ago

It's solved. It seems that on the first run, onnxruntime needs to allocate memory, and this step takes a lot of time. Once that warm-up cost is excluded, the CUDA execution provider is faster than the CPU one on large batches. Now I want to enable mixed precision; maybe it will give a shorter inference time.
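
For anyone who hits the same thing, a minimal sketch of timing with the warm-up run excluded (the path, batch size, and input shape are placeholders, not fast-reid defaults):

```python
import time

import numpy as np
import onnxruntime as ort

MODEL_PATH = "mgn.onnx"  # placeholder path
batch = np.random.rand(32, 3, 256, 128).astype(np.float32)

sess = ort.InferenceSession(
    MODEL_PATH,
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"])
name = sess.get_inputs()[0].name

# Warm-up: the first run pays for CUDA memory allocation and
# kernel initialization, so exclude it from the measurement.
sess.run(None, {name: batch})

start = time.perf_counter()
for _ in range(100):
    sess.run(None, {name: batch})
print(f"avg {(time.perf_counter() - start) / 100 * 1000:.1f} ms/batch")
```

As for mixed precision, one common route (not fast-reid-specific, so treat it as an assumption about your setup) is converting the exported model to FP16 with the onnxconverter-common package:

```python
import onnx
from onnxconverter_common import float16

model = onnx.load("mgn.onnx")  # placeholder path
model_fp16 = float16.convert_float_to_float16(model)
onnx.save(model_fp16, "mgn_fp16.onnx")
```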