Open vedrusss opened 3 years ago
Hi, the converter works on my side (with 544x960). Please install the latest code and try again. (mmdetection updated the RoI extractor code to support ONNX, which broke my code; I fixed it a few days ago.)
And if you want to set the opt_shape_param from the CLI:
mmdet2trt --save-engine=true \
    --min-scale 1 3 544 960 \
    --opt-scale 1 3 544 960 \
    --max-scale 1 3 544 960 \
    mmdetection/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py \
    pytorch_models/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco_20200203-3b2f0594.pth \
    pytorch_models/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco_20200203-3b2f0594_converted_fp32.pth
While running the DCNv2 model conversion to TRT I'm getting the following error:
I want to decrease model inference time, so I modified the mmdet2trt.py script by adding the following opt_shape_param at line 321 (all sizes are multiples of 32):
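For reference, a sketch of what such an opt_shape_param might look like; the exact values the reporter used are not shown, so the shapes below are illustrative, taken only from sizes mentioned in this thread (544x960, and 608x1088 as the size reported to work). mmdet2trt expects a [min, opt, max] shape triple per input:

```python
# Illustrative opt_shape_param (values assumed from this thread):
# one input tensor, with a [min, opt, max] shape triple in NCHW order.
opt_shape_param = [
    [
        [1, 3, 544, 960],   # min shape (N, C, H, W)
        [1, 3, 544, 960],   # opt shape
        [1, 3, 608, 1088],  # max shape -- the size reported to work
    ]
]

# Sanity check: all spatial sizes are multiples of 32 ("mod32").
for min_s, opt_s, max_s in opt_shape_param:
    for n, c, h, w in (min_s, opt_s, max_s):
        assert h % 32 == 0 and w % 32 == 0, (h, w)

# The param would then be passed to the converter, e.g.:
# trt_model = mmdet2trt(cfg_path, checkpoint_path, opt_shape_param=opt_shape_param)
```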
I run the following command to reproduce the error above:
mmdet2trt --save-engine=true mmdetection/configs/dcn/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco.py pytorch_models/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco_20200203-3b2f0594.pth pytorch_models/cascade_rcnn_r101_fpn_dconv_c3-c5_1x_coco_20200203-3b2f0594_converted_fp32.pth
Note: all of this works fine if I use bigger sizes, for example [1,3,608,1088]. Why don't smaller sizes work?
Environment: the converter is run inside the Docker image provided with the project.
Additional context: it looks like there is some issue with padding during the calibration step.