WongKinYiu / yolov7

Implementation of paper - YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors
GNU General Public License v3.0

The inference results of the pretrained models yolov7.pt and yolov7.trt are inconsistent #540

Open leayz-888 opened 2 years ago

leayz-888 commented 2 years ago

I tested some images from the COCO validation set and got the following results:

Inference result of yolov7.pt: image 000000263463
Inference result of yolov7.trt (FP16): image 000000263463
Inference result of yolov7.trt (FP32): image 000000263463

I integrated NMS into the ONNX model, and I want to know why there is such a big difference in the inference results. Looking forward to your reply!

The commands for model conversion were:

python export.py --weights ./yolov7.pt --grid --simplify --include-nms
python export.py -o ./yolov7.onnx -e ./yolov7-nms.trt -p fp16
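One way to narrow down where the mismatch appears is to compare the PyTorch model and the exported ONNX model on the same preprocessed input, before TensorRT is involved at all. The sketch below is illustrative only: it assumes an ONNX export without the NMS head (plain `--grid`), that the yolov7 repo is on PYTHONPATH so `attempt_load` can be imported, and that onnxruntime is installed; file names and the dummy input are placeholders.

```python
# Hypothetical sanity check: compare PyTorch output with ONNX Runtime output
# on the same input. Assumes an ONNX export WITHOUT --include-nms, so both
# sides produce the raw (1, N, 85) prediction tensor.
import numpy as np
import torch
import onnxruntime as ort
from models.experimental import attempt_load  # helper from the yolov7 repo

device = torch.device('cpu')
model = attempt_load('yolov7.pt', map_location=device)
model.eval()

# One letterboxed 640x640, RGB, CHW, [0, 1] image; dummy data here.
img = np.random.rand(1, 3, 640, 640).astype(np.float32)

with torch.no_grad():
    pt_out = model(torch.from_numpy(img))[0].numpy()  # raw predictions

sess = ort.InferenceSession('yolov7.onnx', providers=['CPUExecutionProvider'])
onnx_out = sess.run(None, {sess.get_inputs()[0].name: img})[0]

# If the ONNX export itself is the culprit, the difference already shows up
# here; if not, the drift comes from the TensorRT build (e.g. FP16 loss).
print('max abs diff:', np.abs(pt_out - onnx_out).max())
```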

ghost commented 2 years ago

My guess would be the --simplify flag. I'm not sure if it's necessary, but I didn't use it for my conversion and it worked, and I also didn't use --grid.

mgodbole1729 commented 2 years ago

@theRandString what output does that model return, though? It says it returns 3 values instead of 7.
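If it helps, the number, names, and shapes of an exported model's outputs can be checked directly from the ONNX file, without loading it into a runtime (useful because an --include-nms export contains a TensorRT-specific NMS node that onnxruntime cannot execute). The path below is a placeholder.

```python
# Inspect the outputs declared in the ONNX graph.
import onnx

model = onnx.load('yolov7-nms.onnx')  # placeholder path
for out in model.graph.output:
    dims = [d.dim_value or d.dim_param for d in out.type.tensor_type.shape.dim]
    print(out.name, dims)
```

A plain --grid export typically declares a single large prediction tensor, while an NMS/end2end export declares several smaller detection outputs instead.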

PraveenRaja2 commented 1 month ago


Can you please share how you ran inference using yolov7.trt?
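For reference, below is a minimal sketch of one common way to run a serialized TensorRT engine from Python (TensorRT 8.x bindings with pycuda). It is not necessarily how the original poster ran it: the engine path, input shape, and output handling are placeholders, and the preprocessing must match the PyTorch pipeline exactly (letterbox, RGB order, /255 normalization) for the results to be comparable.

```python
# Sketch: deserialize a TensorRT engine, allocate buffers, run one inference.
import numpy as np
import tensorrt as trt
import pycuda.driver as cuda
import pycuda.autoinit  # creates a CUDA context

logger = trt.Logger(trt.Logger.WARNING)
with open('yolov7-nms.trt', 'rb') as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())
context = engine.create_execution_context()

# Allocate host/device buffers for every binding (input + outputs).
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    shape = engine.get_binding_shape(i)
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = np.empty(tuple(shape), dtype=dtype)
    dev = cuda.mem_alloc(host.nbytes)
    host_bufs.append(host)
    dev_bufs.append(dev)
    bindings.append(int(dev))

# Copy the preprocessed image into binding 0, execute, copy outputs back.
img = np.random.rand(1, 3, 640, 640).astype(np.float32)  # dummy input
np.copyto(host_bufs[0], img)
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2(bindings)
for i in range(1, engine.num_bindings):
    cuda.memcpy_dtoh(host_bufs[i], dev_bufs[i])
    print(engine.get_binding_name(i), host_bufs[i].shape)
```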