OpenGVLab / InternImage

[CVPR 2023 Highlight] InternImage: Exploring Large-Scale Vision Foundation Models with Deformable Convolutions
https://arxiv.org/abs/2211.05778
MIT License

[Bug] Error when infer trt model using Python API (mmdeploy) #196

Open monologuesmw opened 1 year ago

monologuesmw commented 1 year ago

After converting the model to a TensorRT engine with mmdeploy, inference via "Inference by Model Converter" works fine and produces result images, but inference via the Python API fails with the error below. How can I solve this? (mmdeploy `master` branch)

```
loading libmmdeploy_trt_net.so ...
loading libmmdeploy_ort_net.so ...
[2023-06-30 07:19:16.008] [mmdeploy] [info] [model.cpp:35] [DirectoryModel] Load model: "/home/project/data/Model/InternImage/20230608_internimage-L_1x"
[2023-06-30 07:19:36.276] [mmdeploy] [error] [trt_net.cpp:28] TRTNet: 1: [pluginV2Runner.cpp::load::293] Error Code 1: Serialization (Serialization assertion creator failed.Cannot deserialize plugin since corresponding IPluginCreator not found in Plugin Registry)
[2023-06-30 07:19:36.281] [mmdeploy] [error] [trt_net.cpp:28] TRTNet: 4: [runtime.cpp::deserializeCudaEngine::50] Error Code 4: Internal Error (Engine deserialization failed.)
[2023-06-30 07:19:36.283] [mmdeploy] [error] [trt_net.cpp:75] failed to deserialize TRT CUDA engine
[2023-06-30 07:19:36.299] [mmdeploy] [error] [net_module.cpp:54] Failed to create Net backend: tensorrt, config: { "context": { "device": "<any>", "model": "<any>", "stream": "<any>" }, "input": [ "prep_output" ], "input_map": { "img": "input" }, "is_batched": true, "module": "Net", "name": "cascadercnn", "output": [ "infer_output" ], "output_map": {}, "type": "Task" }
[2023-06-30 07:19:36.299] [mmdeploy] [error] [task.cpp:99] error parsing config: { "context": { "device": "<any>", "model": "<any>", "stream": "<any>" }, "input": [ "prep_output" ], "input_map": { "img": "input" }, "is_batched": true, "module": "Net", "name": "cascadercnn", "output": [ "infer_output" ], "output_map": {}, "type": "Task" }
Segmentation fault (core dumped)
```
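The decisive line is the Serialization error: TensorRT cannot deserialize the engine because the custom-op plugin creators (registered by mmdeploy's TensorRT ops library) are not in the plugin registry at load time. A common workaround is to load that shared library with `ctypes` before the engine is deserialized. Below is a minimal, hedged sketch; the library directory and the `mmdeploy_runtime.Detector` call at the end are assumptions to adapt to your own build and task:

```python
import ctypes
import glob
import os


def find_plugin_libs(lib_dir):
    """Return candidate mmdeploy TensorRT plugin libraries under lib_dir.

    The name pattern matches the library mmdeploy builds its custom
    TensorRT ops into (assumed layout; adjust if your build differs).
    """
    pattern = os.path.join(lib_dir, "libmmdeploy_tensorrt_ops*.so")
    return sorted(glob.glob(pattern))


def load_plugins(lib_dir):
    """Load each plugin library globally so its IPluginCreator instances
    register with the TensorRT plugin registry before deserialization."""
    handles = []
    for path in find_plugin_libs(lib_dir):
        handles.append(ctypes.CDLL(path, mode=ctypes.RTLD_GLOBAL))
    return handles


# Hypothetical usage (paths are assumptions, not from the issue):
#
#     load_plugins("/path/to/mmdeploy/build/lib")
#     from mmdeploy_runtime import Detector  # import AFTER plugins load
#     detector = Detector(
#         model_path="/home/project/data/Model/InternImage/"
#                    "20230608_internimage-L_1x",
#         device_name="cuda",
#         device_id=0)
```

An alternative with the same effect is preloading the library from the shell, e.g. `LD_PRELOAD=/path/to/libmmdeploy_tensorrt_ops.so python infer.py`, so the plugin creators are registered before any engine is opened.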

GitHubYuxiao commented 9 months ago

@monologuesmw Hello, I am hitting the same problem, and I want to deploy the TensorRT engine with the C++ API. Were you able to get it working? Please reply~