open-mmlab / mmdeploy

OpenMMLab Model Deployment Framework
https://mmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] I can't deploy detector with TensorRT #2080

Closed CerPhd closed 1 year ago

CerPhd commented 1 year ago

Checklist

Describe the bug

I have already managed to deploy the detector and the pose estimator with ONNX. With TensorRT I also managed to deploy the pose estimator. However, when I try to deploy RTMDet-nano with TensorRT on CUDA, I get the following error.

Reproduction

python tools/deploy.py configs/mmdet/detection/detection_tensorrt_static-320x320.py C:\Users\lucac\miniconda3\envs\mmdeploy/mmpose/projects/rtmpose/rtmdet/person/rtmdet_nano_320-8xb32_coco-person.py https://download.openmmlab.com/mmpose/v1/projects/rtmpose/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth demo/resources/human-pose.jpg --work-dir rtmpose-trt/rtmpose-nano --device cuda:0

Environment

05/15 22:37:53 - mmengine - INFO - **********Environmental information**********
C:\Users\lucac\miniconda3\envs\mmdeploy\lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: Could not find module 'C:\Users\lucac\miniconda3\envs\mmdeploy\Lib\site-packages\torchvision\image.pyd' (or one of its dependencies). Try using the full path with constructor syntax.
  warn(f"Failed to load image Python extension: {e}")
05/15 22:37:53 - mmengine - INFO - sys.platform: win32
05/15 22:37:53 - mmengine - INFO - Python: 3.8.16 | packaged by conda-forge | (default, Feb  1 2023, 15:53:35) [MSC v.1929 64 bit (AMD64)]
05/15 22:37:53 - mmengine - INFO - CUDA available: True
05/15 22:37:53 - mmengine - INFO - numpy_random_seed: 2147483648
05/15 22:37:53 - mmengine - INFO - GPU 0: NVIDIA GeForce RTX 3080 Laptop GPU
05/15 22:37:53 - mmengine - INFO - CUDA_HOME: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.7
05/15 22:37:53 - mmengine - INFO - NVCC: Cuda compilation tools, release 11.7, V11.7.64
05/15 22:37:53 - mmengine - INFO - MSVC: Microsoft (R) C/C++ Optimizing Compiler versione 19.33.31630 per x64
05/15 22:37:53 - mmengine - INFO - GCC: n/a
05/15 22:37:53 - mmengine - INFO - PyTorch: 1.12.1+cu113
05/15 22:37:53 - mmengine - INFO - PyTorch compiling details: PyTorch built with:
  - C++ Version: 199711
  - MSVC 192829337
  - Intel(R) Math Kernel Library Version 2020.0.2 Product Build 20200624 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
  - OpenMP 2019
  - LAPACK is enabled (usually provided by MKL)
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86;-gencode;arch=compute_37,code=compute_37
  - CuDNN 8.3.2  (built against CUDA 11.5)
  - Magma 2.5.4
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.3.2, CXX_COMPILER=C:/actions-runner/_work/pytorch/pytorch/builder/windows/tmp_bin/sccache-cl.exe, CXX_FLAGS=/DWIN32 /D_WINDOWS /GR /EHsc /w /bigobj -DUSE_PTHREADPOOL -openmp:experimental -IC:/actions-runner/_work/pytorch/pytorch/builder/windows/mkl/include -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_FBGEMM -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=OFF, USE_OPENMP=ON, USE_ROCM=OFF,

05/15 22:37:53 - mmengine - INFO - TorchVision: 0.13.1+cu113
05/15 22:37:53 - mmengine - INFO - OpenCV: 4.7.0
05/15 22:37:53 - mmengine - INFO - MMEngine: 0.7.3
05/15 22:37:53 - mmengine - INFO - MMCV: 2.0.0
05/15 22:37:53 - mmengine - INFO - MMCV Compiler: MSVC 192829924
05/15 22:37:53 - mmengine - INFO - MMCV CUDA Compiler: 11.3
05/15 22:37:53 - mmengine - INFO - MMDeploy: 1.0.0+162f4cb
05/15 22:37:53 - mmengine - INFO -

05/15 22:37:53 - mmengine - INFO - **********Backend information**********
05/15 22:37:53 - mmengine - INFO - tensorrt:    8.2.3.0
05/15 22:37:53 - mmengine - INFO - tensorrt custom ops: NotAvailable
05/15 22:37:53 - mmengine - INFO - ONNXRuntime: 1.8.1
05/15 22:37:53 - mmengine - INFO - ONNXRuntime-gpu:     1.14.1
05/15 22:37:53 - mmengine - INFO - ONNXRuntime custom ops:      NotAvailable
05/15 22:37:53 - mmengine - INFO - pplnn:       None
05/15 22:37:53 - mmengine - INFO - ncnn:        None
05/15 22:37:53 - mmengine - INFO - snpe:        None
05/15 22:37:53 - mmengine - INFO - openvino:    None
05/15 22:37:53 - mmengine - INFO - torchscript: 1.12.1+cu113
05/15 22:37:53 - mmengine - INFO - torchscript custom ops:      NotAvailable
05/15 22:37:53 - mmengine - INFO - rknn-toolkit:        None
05/15 22:37:53 - mmengine - INFO - rknn-toolkit2:       None
05/15 22:37:53 - mmengine - INFO - ascend:      None
05/15 22:37:53 - mmengine - INFO - coreml:      None
05/15 22:37:53 - mmengine - INFO - tvm: None
05/15 22:37:53 - mmengine - INFO - vacc:        None
05/15 22:37:53 - mmengine - INFO -

05/15 22:37:53 - mmengine - INFO - **********Codebase information**********
05/15 22:37:53 - mmengine - INFO - mmdet:       3.0.0
05/15 22:37:53 - mmengine - INFO - mmseg:       None
05/15 22:37:53 - mmengine - INFO - mmpretrain:  None
05/15 22:37:53 - mmengine - INFO - mmocr:       None
05/15 22:37:53 - mmengine - INFO - mmedit:      None
05/15 22:37:53 - mmengine - INFO - mmdet3d:     None
05/15 22:37:53 - mmengine - INFO - mmpose:      1.0.0
05/15 22:37:53 - mmengine - INFO - mmrotate:    None
05/15 22:37:53 - mmengine - INFO - mmaction:    None
05/15 22:37:53 - mmengine - INFO - mmrazor:     None

Error traceback

[05/15/2023-22:52:28] [TRT] [I] [MemUsageSnapshot] End constructing builder kernel library: CPU 11457 MiB, GPU 1344 MiB
[05/15/2023-22:52:28] [TRT] [I] ----------------------------------------------------------------
[05/15/2023-22:52:28] [TRT] [I] Input filename:   rtmpose-trt/rtm-nano\end2end.onnx
[05/15/2023-22:52:28] [TRT] [I] ONNX IR version:  0.0.6
[05/15/2023-22:52:28] [TRT] [I] Opset version:    11
[05/15/2023-22:52:28] [TRT] [I] Producer name:    pytorch
[05/15/2023-22:52:28] [TRT] [I] Producer version: 1.12.1
[05/15/2023-22:52:28] [TRT] [I] Domain:
[05/15/2023-22:52:28] [TRT] [I] Model version:    0
[05/15/2023-22:52:28] [TRT] [I] Doc string:
[05/15/2023-22:52:28] [TRT] [I] ----------------------------------------------------------------
[05/15/2023-22:52:28] [TRT] [W] onnx2trt_utils.cpp:366: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[05/15/2023-22:52:29] [TRT] [I] No importer registered for op: TRTBatchedNMS. Attempting to import as plugin.
[05/15/2023-22:52:29] [TRT] [I] Searching for plugin: TRTBatchedNMS, plugin_version: 1, plugin_namespace:
[05/15/2023-22:52:29] [TRT] [E] ModelImporter.cpp:773: While parsing node number 452 [TRTBatchedNMS -> "onnx::Reshape_1364"]:
[05/15/2023-22:52:29] [TRT] [E] ModelImporter.cpp:774: --- Begin node ---
[05/15/2023-22:52:29] [TRT] [E] ModelImporter.cpp:775: input: "mmdeploy::TRTBatchedNMS_1363"
input: "y.24"
output: "onnx::Reshape_1364"
output: "onnx::Reshape_1365"
name: "TRTBatchedNMS_452"
op_type: "TRTBatchedNMS"
attribute {
  name: "background_label_id"
  i: -1
  type: INT
}
attribute {
  name: "clip_boxes"
  i: 0
  type: INT
}
attribute {
  name: "iou_threshold"
  f: 0.6
  type: FLOAT
}
attribute {
  name: "is_normalized"
  i: 0
  type: INT
}
attribute {
  name: "keep_topk"
  i: 100
  type: INT
}
attribute {
  name: "num_classes"
  i: 1
  type: INT
}
attribute {
  name: "return_index"
  i: 0
  type: INT
}
attribute {
  name: "score_threshold"
  f: 0.05
  type: FLOAT
}
attribute {
  name: "topk"
  i: 5000
  type: INT
}
domain: "mmdeploy"

[05/15/2023-22:52:29] [TRT] [E] ModelImporter.cpp:776: --- End node ---
[05/15/2023-22:52:29] [TRT] [E] ModelImporter.cpp:779: ERROR: builtin_op_importers.cpp:4870 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Process Process-3:
Traceback (most recent call last):
  File "C:\Users\lucac\miniconda3\envs\mmdeploy\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "C:\Users\lucac\miniconda3\envs\mmdeploy\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\lucac\miniconda3\envs\mmdeploy\mmdeploy\apis\core\pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "C:\Users\lucac\miniconda3\envs\mmdeploy\mmdeploy\apis\utils\utils.py", line 98, in to_backend
    return backend_mgr.to_backend(
  File "C:\Users\lucac\miniconda3\envs\mmdeploy\mmdeploy\backend\tensorrt\backend_manager.py", line 127, in to_backend
    onnx2tensorrt(
  File "C:\Users\lucac\miniconda3\envs\mmdeploy\mmdeploy\backend\tensorrt\onnx2tensorrt.py", line 79, in onnx2tensorrt
    from_onnx(
  File "C:\Users\lucac\miniconda3\envs\mmdeploy\mmdeploy\backend\tensorrt\utils.py", line 185, in from_onnx
    raise RuntimeError(f'Failed to parse onnx, {error_msgs}')
RuntimeError: Failed to parse onnx, In node 452 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"

05/15 22:52:29 - mmengine - ERROR - C:\Users\lucac\miniconda3\envs\mmdeploy\mmdeploy\apis\core\pipeline_manager.py - pop_mp_output - 80 - `mmdeploy.apis.utils.utils.to_backend` with Call id: 1 failed. exit.
RunningLeon commented 1 year ago

@CerPhd Hi, you have not built the TensorRT custom ops, as your environment shows: 05/15 22:37:53 - mmengine - INFO - tensorrt custom ops: NotAvailable. You could build them from source or use a prebuilt package. Please refer to https://github.com/open-mmlab/mmdeploy/blob/main/docs/en/02-how-to-run/prebuilt_package_windows.md
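After installing or building the ops, you can verify that TensorRT can actually see them before re-running deploy.py. A minimal sketch (the library path is an example you must adjust; on Windows the prebuilt package typically ships the ops as mmdeploy_tensorrt_ops.dll under its lib directory, on Linux as libmmdeploy_tensorrt_ops.so):

# Load the mmdeploy TensorRT ops library and confirm that the
# TRTBatchedNMS plugin is registered. The path below is an example --
# point it at the ops library from your build or prebuilt package.
import ctypes

import tensorrt as trt

ctypes.CDLL(r'C:\path\to\mmdeploy_tensorrt_ops.dll')  # example path, adjust

logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, '')  # register plugins from loaded libraries

creators = trt.get_plugin_registry().plugin_creator_list
print('TRTBatchedNMS registered:',
      any(c.name == 'TRTBatchedNMS' for c in creators))

If this prints False, the conversion will keep failing with the same "Plugin not found" assertion.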

CerPhd commented 1 year ago

Thank you very much! I managed to deploy the detector, but now I can't run video inference with TensorRT. I will create a new issue.

goldentimejsky commented 11 months ago

Hi @RunningLeon, I ran into the same problem: tensorrt custom ops: NotAvailable. I found the doc here and ran the command python tools/scripts/build_ubuntu_x64_ort.py, but the TensorRT ops are still NotAvailable. Here is my env, followed by the check I used to see where mmdeploy looks for the plugin library:

12/10 00:38:10 - mmengine - INFO - 

12/10 00:38:10 - mmengine - INFO - **********Environmental information**********
12/10 00:38:11 - mmengine - INFO - sys.platform: linux
12/10 00:38:11 - mmengine - INFO - Python: 3.8.10 (default, Jun  4 2021, 15:09:15) [GCC 7.5.0]
12/10 00:38:11 - mmengine - INFO - CUDA available: True
12/10 00:38:11 - mmengine - INFO - numpy_random_seed: 2147483648
12/10 00:38:11 - mmengine - INFO - GPU 0: NVIDIA GeForce RTX 3090
12/10 00:38:11 - mmengine - INFO - CUDA_HOME: /usr/local/cuda
12/10 00:38:11 - mmengine - INFO - NVCC: Cuda compilation tools, release 11.3, V11.3.109
12/10 00:38:11 - mmengine - INFO - GCC: gcc (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
12/10 00:38:11 - mmengine - INFO - PyTorch: 1.10.0+cu113
12/10 00:38:11 - mmengine - INFO - PyTorch compiling details: PyTorch built with:
  - GCC 7.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.2.3 (Git Hash 7336ca9f055cf1bfa13efb658fe15dc9b41f0740)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX512
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  - CuDNN 8.2
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.2.0, CXX_COMPILER=/opt/rh/devtoolset-7/root/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.10.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, 

12/10 00:38:11 - mmengine - INFO - TorchVision: 0.11.1+cu113
12/10 00:38:11 - mmengine - INFO - OpenCV: 4.8.0
12/10 00:38:11 - mmengine - INFO - MMEngine: 0.8.3
12/10 00:38:11 - mmengine - INFO - MMCV: 2.0.1
12/10 00:38:11 - mmengine - INFO - MMCV Compiler: GCC 9.3
12/10 00:38:11 - mmengine - INFO - MMCV CUDA Compiler: 11.3
12/10 00:38:11 - mmengine - INFO - MMDeploy: 1.3.0+660af62
12/10 00:38:11 - mmengine - INFO - 

12/10 00:38:11 - mmengine - INFO - **********Backend information**********
12/10 00:38:11 - mmengine - INFO - tensorrt:    8.6.1.post1
12/10 00:38:11 - mmengine - INFO - tensorrt custom ops: NotAvailable
12/10 00:38:11 - mmengine - INFO - ONNXRuntime: 1.8.1
12/10 00:38:11 - mmengine - INFO - ONNXRuntime-gpu:     None
12/10 00:38:11 - mmengine - INFO - ONNXRuntime custom ops:      Available
12/10 00:38:11 - mmengine - INFO - pplnn:       None
12/10 00:38:11 - mmengine - INFO - ncnn:        None
12/10 00:38:11 - mmengine - INFO - snpe:        None
12/10 00:38:11 - mmengine - INFO - openvino:    None
12/10 00:38:11 - mmengine - INFO - torchscript: 1.10.0+cu113
12/10 00:38:11 - mmengine - INFO - torchscript custom ops:      NotAvailable
12/10 00:38:11 - mmengine - INFO - rknn-toolkit:        None
12/10 00:38:11 - mmengine - INFO - rknn-toolkit2:       None
12/10 00:38:11 - mmengine - INFO - ascend:      None
12/10 00:38:11 - mmengine - INFO - coreml:      None
12/10 00:38:11 - mmengine - INFO - tvm: None
12/10 00:38:11 - mmengine - INFO - vacc:        None
12/10 00:38:11 - mmengine - INFO - 

12/10 00:38:11 - mmengine - INFO - **********Codebase information**********
12/10 00:38:11 - mmengine - INFO - mmdet:       3.1.0
12/10 00:38:11 - mmengine - INFO - mmseg:       None
12/10 00:38:11 - mmengine - INFO - mmpretrain:  None
12/10 00:38:11 - mmengine - INFO - mmocr:       None
12/10 00:38:11 - mmengine - INFO - mmagic:      None
12/10 00:38:11 - mmengine - INFO - mmdet3d:     None
12/10 00:38:11 - mmengine - INFO - mmpose:      None
12/10 00:38:11 - mmengine - INFO - mmrotate:    None
12/10 00:38:11 - mmengine - INFO - mmaction:    None
12/10 00:38:11 - mmengine - INFO - mmrazor:     None
12/10 00:38:11 - mmengine - INFO - mmyolo:      None
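For reference, this is the check I used (a minimal sketch, assuming the helpers in mmdeploy.backend.tensorrt.init_plugins from mmdeploy 1.x; the exact names and location may differ in other versions):

# Diagnose "tensorrt custom ops: NotAvailable": print where mmdeploy
# expects the TensorRT plugin library and whether it can be loaded.
from mmdeploy.backend.tensorrt.init_plugins import (get_ops_path,
                                                    load_tensorrt_plugin)

lib_path = get_ops_path()
print('expected plugin library:', repr(lib_path))  # empty/missing => ops never built
print('loaded:', load_tensorrt_plugin())

An empty or missing path here matches the warning in the log below ("Could not load the library of tensorrt plugins"). Note that build_ubuntu_x64_ort.py builds the ONNX Runtime custom ops, which would explain why ONNXRuntime custom ops became Available while the TensorRT ops did not.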
goldentimejsky commented 11 months ago

And here is the original error log:

12/10 00:12:43 - mmengine - INFO - Execute onnx optimize passes.
12/10 00:12:50 - mmengine - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
12/10 00:12:54 - mmengine - INFO - Start pipeline mmdeploy.apis.utils.utils.to_backend in subprocess
12/10 00:12:54 - mmengine - WARNING - Could not load the library of tensorrt plugins.             Because the file does not exist: 
[12/10/2023-00:12:54] [TRT] [I] [MemUsageChange] Init CUDA: CPU +13, GPU +0, now: CPU 93, GPU 262 (MiB)
[12/10/2023-00:13:02] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +1445, GPU +268, now: CPU 1614, GPU 530 (MiB)
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message.  If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 927699213
[12/10/2023-00:13:03] [TRT] [I] ----------------------------------------------------------------
[12/10/2023-00:13:03] [TRT] [I] Input filename:   mmdeploy_models/mmdet/ort/dino_fp16/end2end.onnx
[12/10/2023-00:13:03] [TRT] [I] ONNX IR version:  0.0.7
[12/10/2023-00:13:03] [TRT] [I] Opset version:    11
[12/10/2023-00:13:03] [TRT] [I] Producer name:    pytorch
[12/10/2023-00:13:03] [TRT] [I] Producer version: 1.10
[12/10/2023-00:13:03] [TRT] [I] Domain:           
[12/10/2023-00:13:03] [TRT] [I] Model version:    0
[12/10/2023-00:13:03] [TRT] [I] Doc string:       
[12/10/2023-00:13:03] [TRT] [I] ----------------------------------------------------------------
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message.  If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons.  To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 927699213
[12/10/2023-00:13:04] [TRT] [W] onnx2trt_utils.cpp:374: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[12/10/2023-00:13:04] [TRT] [W] onnx2trt_utils.cpp:400: One or more weights outside the range of INT32 was clamped
[12/10/2023-00:13:04] [TRT] [I] No importer registered for op: TRTInstanceNormalization. Attempting to import as plugin.
[12/10/2023-00:13:04] [TRT] [I] Searching for plugin: TRTInstanceNormalization, plugin_version: 1, plugin_namespace: 
[12/10/2023-00:13:04] [TRT] [E] 3: getPluginCreator could not find plugin: TRTInstanceNormalization version: 1
[12/10/2023-00:13:04] [TRT] [E] ModelImporter.cpp:771: While parsing node number 4110 [TRTInstanceNormalization -> "10302"]:
[12/10/2023-00:13:04] [TRT] [E] ModelImporter.cpp:772: --- Begin node ---
[12/10/2023-00:13:04] [TRT] [E] ModelImporter.cpp:773: input: "10299"
input: "10300"
input: "10301"
output: "10302"
name: "TRTInstanceNormalization_4110"
op_type: "TRTInstanceNormalization"
attribute {
  name: "epsilon"
  f: 1e-05
  type: FLOAT
}
domain: "mmdeploy"

[12/10/2023-00:13:04] [TRT] [E] ModelImporter.cpp:774: --- End node ---
[12/10/2023-00:13:04] [TRT] [E] ModelImporter.cpp:777: ERROR: builtin_op_importers.cpp:5404 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Process Process-3:
Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/root/miniconda3/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/root/code/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/root/code/mmdeploy/mmdeploy/apis/utils/utils.py", line 98, in to_backend
    return backend_mgr.to_backend(
  File "/root/code/mmdeploy/mmdeploy/backend/tensorrt/backend_manager.py", line 127, in to_backend
    onnx2tensorrt(
  File "/root/code/mmdeploy/mmdeploy/backend/tensorrt/onnx2tensorrt.py", line 79, in onnx2tensorrt
    from_onnx(
  File "/root/code/mmdeploy/mmdeploy/backend/tensorrt/utils.py", line 185, in from_onnx
    raise RuntimeError(f'Failed to parse onnx, {error_msgs}')
RuntimeError: Failed to parse onnx, In node 4110 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"

12/10 00:13:05 - mmengine - ERROR - /root/code/mmdeploy/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 - `mmdeploy.apis.utils.utils.to_backend` with Call id: 1 failed. exit.
goldentimejsky commented 11 months ago

I solved it.