Closed sonOfAnton29 closed 1 year ago
Hi, I've only done this in C++, but just from reading that, I believe your input shape might be incorrect. Please be aware that unless you export your .onnx with the --dynamic flag, the model will have a fixed input shape, unlike the .pt models. Please use the Netron viewer to inspect your .onnx model and see what the input size is.
For example:
python3 export.py \
--weights yolov5s.pt \
--img 640 \
--simplify \
--optimize \
--include onnx
Netron viewer:
So now I know that my cv::dnn::readNetFromONNX("myModel.onnx") will want an input size of 640x640.
I hope that this helps you out!
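To make the fixed-shape point concrete, here is a hedged, pure-NumPy sketch of letterbox preprocessing (the `letterbox` helper and padding value are illustrative, not taken from this thread; YOLOv5's real pipeline would use cv2.resize / blobFromImage). It shows why every frame must be resized and padded to exactly 640x640 before it is handed to a fixed-shape ONNX model:

```python
import numpy as np

def letterbox(img: np.ndarray, size: int = 640, pad_value: int = 114) -> np.ndarray:
    """Resize (H, W, 3) image to fit a size x size square, preserving
    aspect ratio, then pad with pad_value (YOLOv5-style grey padding)."""
    h, w = img.shape[:2]
    scale = size / max(h, w)
    new_h, new_w = round(h * scale), round(w * scale)
    # Nearest-neighbour resize via index sampling (stands in for cv2.resize).
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = img[rows][:, cols]
    # Centre the resized image on a padded square canvas.
    canvas = np.full((size, size, 3), pad_value, dtype=img.dtype)
    top, left = (size - new_h) // 2, (size - new_w) // 2
    canvas[top:top + new_h, left:left + new_w] = resized
    return canvas

frame = np.zeros((480, 640, 3), dtype=np.uint8)  # dummy camera frame
net_input = letterbox(frame)
print(net_input.shape)  # (640, 640, 3)
```

The resulting array is what you would then convert to an NCHW blob (e.g. with cv2.dnn.blobFromImage) before calling the network.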
Please check #10304. You need to change export.py to set do_constant_folding=True; if you do this, then OpenCV 4.6.0's DNN reading of the ONNX model will work.
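For context, a rough sketch of where that flag lives. The keyword arguments below are illustrative assumptions (the opset number and tensor names may differ from your export.py); the one setting the thread cares about is do_constant_folding=True:

```python
# Hypothetical kwargs around YOLOv5's torch.onnx.export call in export.py.
export_kwargs = dict(
    opset_version=12,           # assumption: a commonly used opset
    do_constant_folding=True,   # the fix discussed in #10304
    input_names=["images"],
    output_names=["output"],
)
# Inside export.py these would be spliced into the real call, roughly:
#   torch.onnx.export(model, im, f, **export_kwargs)
print(export_kwargs["do_constant_folding"])  # True
```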
It was already set to true, but thanks for the reply.
@JustasBart thank you. The input size wasn't the problem; I used your export code and it worked. I think --optimize or --simplify did the trick, but I don't know how.
@sonOfAnton29 I'm glad to hear it, cheers! :rocket:
I have the same problem. Can anyone help solve it?
Just a suggestion: exporting to ONNX with the standard YOLOv5 implementation gives a working ONNX model if you use the onnxruntime software. I tested this with CPU and GPU, and the performance is very good. onnxruntime supports both Python and C++ inference; I have tested the Python version so far. What I did was combine OpenCV for image preprocessing and display, but use onnxruntime for the model inference. The yolov5s model runs at about 15 FPS on a Core i7-10850H CPU. I ran the same test on an RTX 2080Ti GPU, and the raw inference performance is 211 FPS! Personally, I would avoid OpenCV's dnn module and use onnxruntime for model inference.
@wvalcke agreed. Avoid DNN and use ONNX directly.
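Whichever runtime does the forward pass, the raw YOLOv5 output (N detections of [cx, cy, w, h, objectness, class scores]) still needs confidence filtering and non-max suppression afterwards. A pure-NumPy NMS sketch (helper names are mine, not from this thread or the YOLOv5 codebase):

```python
import numpy as np

def iou(box: np.ndarray, boxes: np.ndarray) -> np.ndarray:
    """IoU between one box and many, all in [x1, y1, x2, y2] format."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thresh: float = 0.45) -> list:
    """Greedy NMS: keep the highest-scoring boxes, drop heavy overlaps."""
    order = scores.argsort()[::-1]
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0, 2] -- the near-duplicate box 1 is suppressed
```

In practice you would use cv2.dnn.NMSBoxes or a vectorised implementation, but the greedy loop above is the same algorithm.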
👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.
Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!
Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!
Search before asking
YOLOv5 Component
Detection, Export
Bug
Hello. I recently ran into an issue while working on YOLOv5. I trained a model in the YOLOv5 small weight format, using yolov5s.pt as the initial weights, but after I exported that weight file to ONNX, cv2.dnn.readNetFromONNX() can't read it and raises this exception:
I tried yolov5m.pt and some other options, such as FP16 export, but it did not work. I would appreciate your help.
Environment
- Windows 10
- CUDA available
- Python 3.8.8
- opencv-python 4.6.0
Minimal Reproducible Example
No response
Additional
No response
Are you willing to submit a PR?