darkcoder2000 opened this issue 2 years ago
The implementation behind the Python and C++ interfaces is the same, so you will need to check for bugs in your code. For example, you may be passing metadata shapes instead of the actual shape. Metadata may contain -1 as shape designators, not to mention other things.
You are feeding `512 * 512 * 3 * sizeof(float)`, which is a size in bytes, but the overload you use to create the tensor is likely expecting the number of elements. Also, the image resize is using 512x512; how does that agree with `512 * 512 * 3`?
Is there a C++ reference implementation available that I can follow? The code I am using works perfectly fine for another ObjectDetection ONNX model (1 input, 4 output tensors). My implementation follows the example from here: https://github.com/microsoft/onnxruntime-inference-examples/blob/main/c_cxx/model-explorer/model-explorer.cpp
"For example, you are passing metadata as shapes, instead of the actual shape." Not sure what you mean with metadata. I am extracting the input/ouput tensor shapes from the model so that I know what shapes the model is expecting.
Images are in RGB format, which needs 3 values per pixel. That's why it fits with `512 * 512 * 3`. Also, when the input tensor size doesn't fit, there is an error message.
The problem I am seeing is that one ONNX model for ObjectDetection works in both Python and C++ (using the posted C++ code), while another ONNX model for ObjectDetection works in Python but not in C++ (using the same posted C++ code). That is what this issue report is about.
I am wondering why this is the case, and unfortunately there is only very little example code showing how to properly use ONNX in C++.
I am also facing the same issue. I have two models trained on YOLOv7; one of them works properly in both Python and C++.
But the other model works only in Python. It doesn't output anything in C++, and there is no error, as you mentioned.
The input and output parameters are the same as well.
By the way, have you found any solution for this?
I can load and use a model that has been converted from PyTorch to ONNX with the Python ONNX Runtime. But using the same model with the C++ ONNX Runtime does not work properly: it gives me back strange output tensor shapes. I am not getting any error message.
In Python the output tensors look like this:
But in C++ they look like this:
When I run the inference in C++ I get these shapes back:
Here is the C++ code I am using:
System information
- OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Ubuntu 20
- ONNX Runtime installed from (source or binary): binary
- ONNX Runtime version: 1.9.0 and 1.11.1
- Python version: 3.7.11
- Visual Studio version (if applicable): none
- GCC/Compiler version (if compiling from source): none
- CUDA/cuDNN version: none
- GPU model and memory: none
To Reproduce
Unfortunately, the model is too big to share here.