linghu8812 / tensorrt_inference


After converting the YOLOv5 model's .pt file to ONNX and then to an engine, the output is covered with boxes across the whole image #148

Open mvpzhangqiu opened 2 years ago

mvpzhangqiu commented 2 years ago

[screenshot attachment: cnl_1113_]

linghu8812 commented 2 years ago

Check whether the anchors in the ONNX model are correct, and also the image size in the config file.
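For reference, a minimal sketch of one way to check the input size recorded in the ONNX model against the image size in the config file, using the `onnx` Python package (the model file name is a placeholder):

```python
import onnx

# Load the exported model (placeholder path; use your own file).
model = onnx.load("yolov5s.onnx")

# Print the network inputs; the spatial dimensions should match the
# image width/height set for inference in the config file.
for inp in model.graph.input:
    dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
            for d in inp.type.tensor_type.shape.dim]
    print(inp.name, dims)
```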

sharoseali commented 2 years ago

> Check whether the anchors in the ONNX model are correct, and also the image size in the config file.

How do we locate the anchors in the ONNX model? With Netron we can only view the layers and shapes. I want to use the YOLOv5 .engine file; I converted my custom weights (both the nano and the small model), but I am also getting multiple boxes in the output. By the way, I looked into the code and could not find where the anchors field anchors: [[10,13], [16,30], [33,23], [30,61], [62,45], [59,119], [116,90], [156,198], [373,326]] is used; only num_anchors: [3, 3, 3] is accessed in detection.cpp. Where exactly can we set our anchors to get correct output? @mvpzhangqiu Did you solve the issue?
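For reference, one way to look for the anchor values baked into the exported model without relying on Netron is to dump its small constant tensors. A minimal sketch with the `onnx` package (the model path and the size threshold are assumptions; anchors exported as Constant nodes rather than initializers would need a similar scan of `model.graph.node`):

```python
import onnx
from onnx import numpy_helper

model = onnx.load("yolov5s.onnx")  # placeholder path

# Anchors are usually stored as small constant tensors; print every
# initializer with at most 18 elements (9 anchor pairs) so they stand out.
for init in model.graph.initializer:
    arr = numpy_helper.to_array(init)
    if arr.size <= 18:
        print(init.name, arr.shape, arr.flatten().tolist())
```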

mvpzhangqiu commented 2 years ago

I solved it; my issue was just about the versions of YOLO and ONNX, not about the anchors. Maybe that won't help you.
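The thread does not say which versions turned out to work; as a minimal sketch, the versions actually used for the export can be recorded like this and compared against a known-good setup:

```python
# Print the versions of the packages involved in the .pt -> ONNX export
# so they can be compared with a setup that is known to work.
import torch
import onnx

print("torch:", torch.__version__)
print("onnx:", onnx.__version__)
```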

sharoseali commented 2 years ago

@mvpzhangqiu Can you explain exactly how you solved it and which versions of ONNX and YOLO work correctly, so I can try the same? Maybe it will solve my issue as well. Thanks for your reply.

linghu8812 commented 2 years ago

Use Netron to view the ONNX models at https://pan.baidu.com/s/1Ff_SA9Q66DUnZjSipPa74Q#list/path=%2FONNX_models%2Fyolov5_batch1 (the link and extraction code are shown in https://github.com/linghu8812/tensorrt_inference/blob/master/README.md#supported-models) and compare them with your exported model.
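To complement the visual comparison in Netron, a minimal sketch that prints the opset version and output shapes of a reference model and your own export side by side (file names are placeholders):

```python
import onnx

def summarize(path):
    """Return the opset version plus output names and shapes for a quick diff."""
    model = onnx.load(path)
    outs = []
    for out in model.graph.output:
        dims = [d.dim_value if d.HasField("dim_value") else d.dim_param
                for d in out.type.tensor_type.shape.dim]
        outs.append((out.name, dims))
    return model.opset_import[0].version, outs

print("reference:", summarize("yolov5s_reference.onnx"))  # model from the Baidu link
print("exported: ", summarize("yolov5s.onnx"))            # your own export
```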