Closed by kindle-the-life 1 year ago
My test result is 30 ms (image preprocessing + model inference + resize to original size), while onnxruntime took 32 ms. Environment (notebook): i9-12900H, RTX 2050 4 GB.
@kindle-the-life I found that your code only runs on the CPU, because the input data you created is never transferred to the GPU. I hope this helps.
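A minimal sketch of that fix, assuming a PyTorch model and a `torch.Tensor` input (the function and variable names here are illustrative, not taken from the original code):

```python
import torch

def run_on_gpu(model: torch.nn.Module, batch: torch.Tensor) -> torch.Tensor:
    """Run inference with the model and the input on the same device.

    Falls back to the CPU when no CUDA device is available.
    """
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = model.to(device).eval()
    batch = batch.to(device)  # the transfer that was missing: input must follow the model
    with torch.no_grad():
        return model(batch)
```

If only `model.to("cuda")` is called, the forward pass either fails with a device-mismatch error or, depending on how the session is set up, silently stays on the CPU path.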
I wrote the interface for model inference following the logic of the predict function in efficientad.py. However, in actual testing the average time was close to 1 second (excluding model loading time), and inference was still very slow after converting to the ONNX model.
Device information: CPU: Intel(R) Core(TM) i7-10700K @ 3.80GHz; GPU: NVIDIA GeForce RTX 3080 16G
This is the code tested with the pth model:
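The original code block is not reproduced here. As a rough sketch of how such a timing test might look (the model and input are placeholders, not the author's code), with warm-up runs and GPU synchronization so one-time startup costs do not inflate the average:

```python
import time
import torch

def time_inference(model: torch.nn.Module, batch: torch.Tensor,
                   warmup: int = 3, iters: int = 20) -> float:
    """Return the average per-inference latency in milliseconds.

    Warm-up iterations are excluded so one-time CUDA kernel loading and
    allocator costs do not dominate the measurement.
    """
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):
            model(batch)
        if batch.is_cuda:
            torch.cuda.synchronize()  # wait for queued GPU work before timing
        start = time.perf_counter()
        for _ in range(iters):
            model(batch)
        if batch.is_cuda:
            torch.cuda.synchronize()
        elapsed = time.perf_counter() - start
    return elapsed / iters * 1000.0
```

Timing without warm-up and synchronization is a common cause of "1 second per inference" readings: the first call pays CUDA initialization, and unsynchronized timers measure only kernel launch, not execution.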
This is the test code for inference with the ONNX model:
This is the function that converts the PTH model to the ONNX model: