Hi, thanks for your great work. I borrowed some of your bev_pool_v2 code, specifically the registered op g.op('custom::BEVPoolV2TRT2'), its wrapper class, its CUDA implementation, and its Python API. All of it embedded cleanly into my project, so I can convert the .pth file to .onnx successfully. However, when I then convert the .onnx to a .engine, I hit the error below:
Actually, I followed your instructions to install the TensorRT plugins and MMDeploy, and I also load "TensorRT/lib/libtensorrt_ops.so" with ctypes.CDLL(OS_PATH) before parsing the .onnx, so I assumed the plugins were already registered. But the error message shows the plugin still cannot be found. Is there a step I missed when importing the plugins? Could you share any ideas? Thanks.
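For reference, here is a minimal sketch of the loading pattern I am using. The path and helper name are just examples from my setup, and I am assuming (based on general TensorRT usage, not on this repo's docs) that the library must be loaded with RTLD_GLOBAL and that trt.init_libnvinfer_plugins may need to be called before parsing, so the plugin creators become visible to the registry:

```python
import ctypes

def load_plugin_library(path: str) -> ctypes.CDLL:
    """Load a TensorRT plugin .so before parsing the ONNX file.

    RTLD_GLOBAL makes the library's symbols visible process-wide;
    loading with the default RTLD_LOCAL is a common reason the ONNX
    parser later reports a custom plugin as "not found".
    """
    return ctypes.CDLL(path, mode=ctypes.RTLD_GLOBAL)

# Typical usage before building the engine (paths are examples):
#   load_plugin_library("TensorRT/lib/libtensorrt_ops.so")
#   import tensorrt as trt
#   logger = trt.Logger(trt.Logger.WARNING)
#   trt.init_libnvinfer_plugins(logger, "")  # register creators globally
#   ...create builder/network, then parse the .onnx...
```

Is this roughly the intended order of operations, or does libtensorrt_ops.so need to be loaded some other way (e.g. via trtexec --plugins or LD_PRELOAD)?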