DerryHub / BEVFormer_tensorrt

BEVFormer inference on TensorRT, including INT8 Quantization and Custom TensorRT Plugins (float/half/half2/int8).
Apache License 2.0

A bug when converting ONNX to TRT #79

Open Son-Goku-gpu opened 10 months ago

Son-Goku-gpu commented 10 months ago

Hi, thanks for your great work. I borrowed some of your bev_pool_v2 code, specifically the registered op g.op('custom::BEVPoolV2TRT2') along with its wrapper class, its CUDA implementation, and its Python API. All of these embedded cleanly into my project, so I can convert the .pth file to .onnx successfully. However, when I then convert the .onnx to a .engine, I hit the error below:

[error screenshot]

I followed your instructions to install the TensorRT plugins and MMDeploy, and I also load "TensorRT/lib/libtensorrt_ops.so" with ctypes.CDLL(OS_PATH) before parsing the .onnx, so I assumed the plugins were already loaded. Yet the error message says the plugin still cannot be found. Is there a step I missed when importing the plugins? Can you share any ideas on it? Thanks.
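For context, a minimal sketch of the loading order described above, with a sanity check on the plugin registry before parsing. The library path and the plugin name `BEVPoolV2TRT2` are taken from this issue, not verified against the repo's build output:

```python
import ctypes
import tensorrt as trt

# Assumed path; adjust to wherever the repo's build placed the library.
PLUGIN_LIB = "TensorRT/lib/libtensorrt_ops.so"

# Load the custom-op shared library so its plugin creators can register
# themselves with TensorRT's global plugin registry.
ctypes.CDLL(PLUGIN_LIB)

TRT_LOGGER = trt.Logger(trt.Logger.INFO)

# Register TensorRT's built-in plugins; creators added by the .so above
# should then also be visible in the global registry.
trt.init_libnvinfer_plugins(TRT_LOGGER, "")

# Sanity check: confirm the custom plugin is registered before parsing
# the ONNX file. If it is missing here, the ONNX parser will fail with a
# "plugin not found" style error like the one in the screenshot.
registry = trt.get_plugin_registry()
names = [creator.name for creator in registry.plugin_creator_list]
print("BEVPoolV2TRT2 registered:", "BEVPoolV2TRT2" in names)
```

If the plugin name never appears in the registry, one possible cause is that the .so was built against a different TensorRT version than the Python bindings in use.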

qingwan7 commented 10 months ago

How did you build the environment for exporting the ONNX model? Can you share it?