MuhammadAsadJaved closed this issue 3 years ago.
As far as I can see from the onnx model, there are only two inputs: [input/lwir_input_data:0, input/input_data:0], and both are float32[1,416,416,3]. So:
- do not send `"image_shape": image_size` when calling session.run()
- reshape to the correct shape during preprocessing:
- Your images seem to be grayscale, so they have shape [416, 416]? Try RGB input to get [416, 416, 3]
- expand the dims to [1, 416, 416, 3]. Thanks, Lei
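The steps above can be sketched as a small preprocessing helper. This is a minimal sketch, not the repo's actual code: the function name `to_model_input` is hypothetical, and the nearest-neighbor indexing stands in for a real resizer such as `cv2.resize` or `PIL.Image.resize`.

```python
import numpy as np

def to_model_input(img):
    """Convert an HxW grayscale or HxWx3 RGB array to float32 [1, 416, 416, 3]."""
    if img.ndim == 2:                        # grayscale [H, W] -> replicate to 3 channels
        img = np.stack([img] * 3, axis=-1)
    # crude nearest-neighbor resize to 416x416 (use cv2.resize / PIL in practice)
    h, w = img.shape[:2]
    rows = np.arange(416) * h // 416
    cols = np.arange(416) * w // 416
    img = img[rows][:, cols].astype(np.float32) / 255.0
    return np.expand_dims(img, axis=0)       # [416, 416, 3] -> [1, 416, 416, 3]
```

The key point for this thread is the last line: `np.expand_dims` turns the single image into a batch of one, matching the model's declared [1, 416, 416, 3] input.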
I am using RGB images. It shows 416 x 416 because I only printed w and h. I will try to adjust the pre-processing step. Thank you.
I'm facing the same issue with my onnx model. Can anyone help me?
```
onnxruntime.capi.onnxruntime_pybind11_state.InvalidArgument: [ONNXRuntimeError] : 2 : INVALID_ARGUMENT : Got invalid dimensions for input: input for the following indices
 index: 1 Got: 3 Expected: 1
 Please fix either the inputs or the model.
```
I trained a u2net model on the midv500 dataset to build a semantic segmentation model, then used the exported model with the rembg library to remove image backgrounds. As an example of the mismatch: my image shape is (1440, 2560, 3), while the onnx input shape is [1, 1, 320, 320].
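That error says the model expects a single-channel NCHW tensor [1, 1, 320, 320], while the image is an HxWx3 RGB array. A minimal sketch of the conversion, assuming the model takes a normalized grayscale input (the helper name is hypothetical, the channel mean is a crude grayscale conversion, and the nearest-neighbor indexing stands in for a real resizer):

```python
import numpy as np

def to_u2net_input(img):
    """Convert an HxWx3 RGB uint8 array to float32 [1, 1, 320, 320] (NCHW, grayscale)."""
    gray = img.mean(axis=-1)                   # crude RGB -> grayscale; real code would use luma weights
    h, w = gray.shape
    rows = np.arange(320) * h // 320           # nearest-neighbor resize (cv2.resize in practice)
    cols = np.arange(320) * w // 320
    gray = gray[rows][:, cols].astype(np.float32) / 255.0
    return gray[np.newaxis, np.newaxis, :, :]  # [320, 320] -> [1, 1, 320, 320]
```

Note the two leading axes: one for the batch and one for the single channel, which is exactly the "Got: 3 Expected: 1" dimension the error complains about.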
**Describe the bug** I used the original Yolov3 example and it ran successfully. Then I used my own Yolov3 (it takes two inputs, a visible and an infrared image) and I got this error.
**Urgency** As early as possible.
**System information**
**To Reproduce** Steps and code: I converted the .pb weights to .onnx using https://github.com/onnx/tensorflow-onnx with the command:

```
python -m tf2onnx.convert --input modelInPb/Pedestrian_yolov3_520.pb --inputs input/input_data:0[1,416,416,3],input/lwir_input_data:0[1,416,416,3] --outputs pred_sbbox/concat_2:0,pred_mbbox/concat_2:0,pred_lbbox/concat_2:0 --output modelOut/Pedestrian_yolov3_520.onnx --opset 11
```

Then I use this .onnx model with the following code:
https://drive.google.com/file/d/1vT5ZPH-LuW5cGrdENjWb2uOhvJBygSSk/view?usp=sharing
**Expected behavior** Import the .onnx model; it should show the same output as the official Yolov3 example.
**Screenshots** Screenshots of the error are attached.
**Additional context** I am also not sure if I am calling it the right way:

```python
boxes, scores, indices = session.run(outname, {inname: image_data, lwir_inname: image_data, "image_shape": image_size})
```

This model takes two inputs (a visible image and an infrared image), so how can I pass both? Can I use one image_size for both, or do I need to pass it separately for each? The original Yolov3 example is ![Screenshot from 2020-11-16 11-47-12](https://user-images.githubusercontent.com/28862708/99210877-eced8700-2801-11eb-8d52-3de63bc8bfcd.png)

```python
boxes, scores, indices = session.run(outname, {inname: image_data, "image_shape": image_size})
```

How can I change this example for two inputs?
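One way to sketch the two-input call, based on the input names the converted model reports (`input/input_data:0` and `input/lwir_input_data:0`). This is a sketch under the assumption that the converted graph has no `image_shape` input; the function name and the zero-filled placeholder tensors are hypothetical stand-ins for the real preprocessed frames.

```python
import numpy as np

def run_dual_input(session, visible_data, lwir_data):
    """Run an onnxruntime.InferenceSession that declares two image inputs.

    visible_data, lwir_data: float32 arrays of shape [1, 416, 416, 3].
    """
    # No "image_shape" entry: the tf2onnx-converted graph only declared
    # the two image inputs, so the feed dict maps each name to its tensor.
    return session.run(
        None,  # None -> return all model outputs
        {
            "input/input_data:0": visible_data,    # visible RGB frame
            "input/lwir_input_data:0": lwir_data,  # infrared (LWIR) frame
        },
    )
```

Usage would look like `outputs = run_dual_input(session, visible_data, lwir_data)` with a `session = onnxruntime.InferenceSession("modelOut/Pedestrian_yolov3_520.onnx")` and two separately preprocessed [1, 416, 416, 3] tensors, one per modality.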