tinyvision / DAMO-YOLO

DAMO-YOLO: a fast and accurate object detection method with some new techs, including NAS backbones, efficient RepGFPN, ZeroHead, AlignedOTA, and distillation enhancement.
Apache License 2.0

TensorRT doesn't provide the same output as Torch model #102

Closed duchieuphan2k1 closed 1 year ago

duchieuphan2k1 commented 1 year ago


Question

I have converted the torch model to a TensorRT model using the end2end converter:

```
python tools/converter.py -f configs/damoyolo_tinynasL45_L.py -c best.pth --batch_size 1 --img_size 1024 --trt --end2end --trt_eval
```

This command ran without errors, but evaluation accuracy is 0%, compared to 90% with the torch model. I also ran the demo command on some images, and the output bounding boxes appear random.

I also tried converting to ONNX with this command:

```
python tools/converter.py -f configs/damoyolo_tinynasL45_L.py -c best.pth --batch_size 1 --img_size 1024
```

The ONNX model produces exactly the same output as the torch model.

So am I missing any configuration for TensorRT?


jyqi commented 1 year ago

Hello, currently the End2End NMS module is only compatible with TensorRT 7.2.1.4. Please verify that the TensorRT version you are using matches. If it does not, consider either switching to that TensorRT version or exporting a non-End2End TensorRT engine and implementing NMS post-processing in Python.
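The Python post-processing suggested above can be sketched roughly as follows. This is a minimal pure-Python NMS, assuming the non-End2End engine returns boxes in `[x1, y1, x2, y2]` corner format with per-box confidence scores; the function names and thresholds are illustrative, not part of the DAMO-YOLO codebase.

```python
def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, iou_thr=0.5, score_thr=0.25):
    """Greedy NMS: return indices of kept boxes, highest score first."""
    order = sorted(
        (i for i, s in enumerate(scores) if s >= score_thr),
        key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        # Keep this box only if it does not overlap a kept box too much.
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep

# Example: the second box heavily overlaps the first and is suppressed.
boxes = [[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # -> [0, 2]
```

In production you would typically vectorize this with NumPy, but the greedy logic is the same.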

ategen3rt commented 11 months ago

I have run into this issue as well. The problem is that the TRT8 export passes the wrong value for box_coding to TRT::EfficientNMS_TRT. I've confirmed that PR #113 fixes it: it changes box_coding from 1 (BoxCenterSize) to 0 (BoxCorner). See https://github.com/NVIDIA/TensorRT/tree/release/8.6/plugin/efficientNMSPlugin for more information on the parameters.
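The effect of the wrong box_coding value can be illustrated in plain Python. With box_coding=1 the plugin interprets each box as [x_center, y_center, w, h] and decodes it to corners, so feeding it DAMO-YOLO's already-corner-format [x1, y1, x2, y2] boxes produces shifted, distorted rectangles, which matches the "random" boxes reported above. The sketch below mimics that decode step; the function name is illustrative, not the plugin's actual code.

```python
def center_size_to_corners(box):
    """Decode a [x_center, y_center, w, h] box into [x1, y1, x2, y2],
    as EfficientNMS_TRT does internally when box_coding=1."""
    xc, yc, w, h = box
    return [xc - w / 2, yc - h / 2, xc + w / 2, yc + h / 2]

# DAMO-YOLO's head already emits corner-format boxes:
corner_box = [100.0, 100.0, 200.0, 300.0]  # x1, y1, x2, y2

# With box_coding=0 (BoxCorner) the plugin uses the box as-is.
# With box_coding=1 the same numbers are misread as center/size:
wrong = center_size_to_corners(corner_box)
print(wrong)  # -> [0.0, -50.0, 200.0, 250.0], a garbage rectangle
```

This is why the ONNX model (evaluated without the plugin's decode step) matched the torch output while the TRT8 end2end engine did not.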