DerryHub / BEVFormer_tensorrt

BEVFormer inference on TensorRT, including INT8 Quantization and Custom TensorRT Plugins (float/half/half2/int8).
Apache License 2.0

the shape inference of mmcv::MMCVModulatedDeformConv2d type is missing #42

Closed. TGpastor closed this issue 1 year ago.

TGpastor commented 1 year ago

When I run "sh samples/bevformer/base/pth2onnx.sh -d ${gpu_id}", it warns "the shape inference of mmcv::MMCVModulatedDeformConv2d type is missing". How can I fix it? (The conversion still finishes, but I guess the structure of the model may be affected.)

DerryHub commented 1 year ago

ONNX does not support this operator, but it does not affect TensorRT conversion and inference.

TGpastor commented 1 year ago

Thank you. My supervisor just told me that I need to deploy BEVFormer on different kinds of embedded development boards, so I think I need to convert the pth to ONNX by myself. Still, thank you for your excellent work, even though I can't use it. (By the way, could you please tell me where I can find the network structure file for BEVFormer? The official checkpoints are saved as a dict and I can't convert them to ONNX.)

DerryHub commented 1 year ago

Maybe you can create the model, load the dict file with load_state_dict, and then save it with pickle.
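As a rough illustration of that suggestion (a minimal sketch, assuming an mmdet3d-style config and that the project's custom BEVFormer modules are already importable; the config and checkpoint paths below are placeholders):

```python
import torch
from mmcv import Config
from mmdet3d.models import build_model

# Build the model from its config, then load the official checkpoint dict.
cfg = Config.fromfile('projects/configs/bevformer/bevformer_small.py')  # placeholder path
model = build_model(cfg.model, test_cfg=cfg.get('test_cfg'))

ckpt = torch.load('bevformer_small.pth', map_location='cpu')  # placeholder checkpoint
model.load_state_dict(ckpt.get('state_dict', ckpt), strict=False)
model.eval()

# Save the whole module object (pickle-based), not just the state dict.
torch.save(model, 'bevformer_small_full.pth')
```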

TGpastor commented 1 year ago

Thanks, I know I need to use "model.load_state_dict()", but I just don't know where to import the model from or how to build it. (I thought the BEVFormer project saved the model structure as a file, so I could just import the model from it, but I can't find it. Or should I build the whole model structure myself before I use "model.load_state_dict()"?)

DerryHub commented 1 year ago

You can try to save the model in tool/bevformer/evaluate_pth.py before inference.
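For concreteness, a hypothetical addition inside that script (the exact variable names in evaluate_pth.py may differ; this only marks where the dump would go):

```python
# ... after the model is built and the checkpoint is loaded,
# but before the inference/evaluation loop starts:
torch.save(model, 'bevformer_full_model.pth')  # placeholder output path
```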

TGpastor commented 1 year ago

It works, thank you for helping me so much. Now the last step is to use "torch.onnx.export" to start the conversion, but I need to pass input data with the correct shape and type. I tried a tensor of shape (1, 6, 3, 736, 1280), but it says "TypeError: img_metas must be a list, but got <class 'torch.Tensor'>" (I used "bevformer_small.py" as the config file to build the model). Maybe I should use a tuple for the input data. Could you please tell me the shape and type of the input data?

DerryHub commented 1 year ago

The shapes are in bevformer_small_trt.py. You can try samples/bevformer/small/pth2onnx.sh.
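The TypeError above comes from torch.onnx.export itself: it only traces tensor (or tuple/dict-of-tensor) inputs, while the plain bevformer_small.py model expects img_metas as a list of dicts; the *_trt.py configs wrap the model so every input is a plain tensor. Purely as an illustration, not the repo's actual export script (input names, shapes, and opset below are placeholders):

```python
import torch

model.eval()
dummy_img = torch.randn(1, 6, 3, 736, 1280)  # placeholder: batch of 6 camera images

torch.onnx.export(
    model,            # a TRT-style model whose forward takes only tensors
    (dummy_img,),     # export cannot pass an img_metas list of dicts
    'bevformer_small.onnx',
    input_names=['image'],
    output_names=['outputs'],
    opset_version=13,
)
```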

TGpastor commented 1 year ago

But I remember you said your ONNX model "can only be used as an intermediate model for the tensorrt engine", so can I use "bevformer_small.py" to build the model and just use the shapes from "bevformer_small_trt.py"?

DerryHub commented 1 year ago

bevformer_small.py cannot be converted to ONNX, but bevformer_small_trt.py can. You can try to implement the unsupported ops of the ONNX file in ONNX Runtime, similar to the TensorRT plugins.
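A hedged sketch of what "implement the ops in ONNX Runtime" can look like in practice: mmcv 1.x builds that include its ONNX Runtime custom ops ship a shared library that plays the same role as the TensorRT plugins here. Whether it covers every op in this particular ONNX file depends on your mmcv build; the model file and input name below are placeholders.

```python
import numpy as np
import onnxruntime as ort
from mmcv.ops import get_onnxruntime_op_path

# Register mmcv's custom-op library so ONNX Runtime can resolve the mmcv:: ops.
so = ort.SessionOptions()
so.register_custom_ops_library(get_onnxruntime_op_path())

sess = ort.InferenceSession('bevformer_small.onnx', so)        # placeholder model file
dummy = np.random.randn(1, 6, 3, 736, 1280).astype(np.float32)
outputs = sess.run(None, {'image': dummy})                     # placeholder input name
```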

TGpastor commented 1 year ago

Thanks, I'll try it later.

TGpastor commented 1 year ago

Sorry to bother you again. I finally evaluated the TRT model; now I want to save the result and visualize it. How can I do that?

DerryHub commented 1 year ago

https://developer.nvidia.com/blog/exploring-tensorrt-engines-with-trex/ may help you.

TGpastor commented 1 year ago

Thank you, but I mean I want to visualize the bbox result of the evaluation (to draw the bboxes on the pictures). I tried to use "mmcv.dump(bbox_results['bbox_results'], '/root/autodl-tmp/BEVFormer/results_nusc.json')" to save the bboxes, but it says "list indices must be integers or slices, not str". How can I correctly get the bbox result from the model and draw it on the picture?

DerryHub commented 1 year ago

Maybe you can use the nuScenes tool to parse it, but I don't know much about it.
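One common route with mmdet3d-based projects (a sketch assuming `dataset` is the NuScenesDataset used for evaluation and `bbox_results` is the raw list returned by the test loop, one dict per sample): the dataset can format the list into the standard nuScenes results JSON, which the nuScenes devkit can then visualize. Note that `bbox_results` is a list, which is why indexing it with a string key fails.

```python
# Format the per-sample result list into the standard nuScenes submission JSON.
result_files, tmp_dir = dataset.format_results(
    bbox_results, jsonfile_prefix='/root/autodl-tmp/BEVFormer/results')
print(result_files)  # maps result type to the generated results_nusc.json path
```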