Environment
07/06 02:18:59 - mmengine - INFO - TorchVision: 0.14.0
07/06 02:18:59 - mmengine - INFO - OpenCV: 4.8.0
07/06 02:18:59 - mmengine - INFO - MMEngine: 0.7.4
07/06 02:18:59 - mmengine - INFO - MMCV: 2.0.0rc4
07/06 02:18:59 - mmengine - INFO - MMCV Compiler: GCC 9.3
07/06 02:18:59 - mmengine - INFO - MMCV CUDA Compiler: 11.6
07/06 02:18:59 - mmengine - INFO - MMDeploy: 1.2.0+ae381c8
07/06 02:18:59 - mmengine - INFO -
07/06 02:18:59 - mmengine - INFO - **********Backend information**********
07/06 02:18:59 - mmengine - INFO - tensorrt: 8.2.4.2
07/06 02:18:59 - mmengine - INFO - tensorrt custom ops: Available
07/06 02:18:59 - mmengine - INFO - ONNXRuntime: None
07/06 02:18:59 - mmengine - INFO - ONNXRuntime-gpu: 1.8.1
07/06 02:18:59 - mmengine - INFO - ONNXRuntime custom ops: Available
07/06 02:18:59 - mmengine - INFO - pplnn: None
07/06 02:18:59 - mmengine - INFO - ncnn: None
07/06 02:18:59 - mmengine - INFO - snpe: None
07/06 02:18:59 - mmengine - INFO - openvino: None
07/06 02:18:59 - mmengine - INFO - torchscript: 1.13.0
07/06 02:18:59 - mmengine - INFO - torchscript custom ops: NotAvailable
07/06 02:18:59 - mmengine - INFO - rknn-toolkit: None
07/06 02:18:59 - mmengine - INFO - rknn-toolkit2: None
07/06 02:18:59 - mmengine - INFO - ascend: None
07/06 02:18:59 - mmengine - INFO - coreml: None
07/06 02:18:59 - mmengine - INFO - tvm: None
07/06 02:18:59 - mmengine - INFO - vacc: None
07/06 02:19:00 - mmengine - INFO -
07/06 02:19:00 - mmengine - INFO - **********Codebase information**********
07/06 02:19:00 - mmengine - INFO - mmdet: 3.1.0
07/06 02:19:00 - mmengine - INFO - mmseg: None
07/06 02:19:00 - mmengine - INFO - mmpretrain: None
07/06 02:19:00 - mmengine - INFO - mmocr: None
07/06 02:19:00 - mmengine - INFO - mmagic: None
07/06 02:19:00 - mmengine - INFO - mmdet3d: None
07/06 02:19:00 - mmengine - INFO - mmpose: None
07/06 02:19:00 - mmengine - INFO - mmrotate: None
07/06 02:19:00 - mmengine - INFO - mmaction: None
07/06 02:19:00 - mmengine - INFO - mmrazor: None
07/06 02:19:00 - mmengine - INFO - mmyolo: None
Error traceback
07/06 02:12:51 - mmengine - WARNING - DeprecationWarning: get_onnx_config will be deprecated in the future.
07/06 02:12:51 - mmengine - INFO - Export PyTorch model to ONNX: /workdir/mmdeploy-workdir/dyhead_swin_1024.onnx.
07/06 02:12:51 - mmengine - WARNING - Can not find mmdet.models.utils.transformer.PatchMerging.forward, function rewrite will not be applied
/root/workspace/mmdeploy/mmdeploy/codebase/mmdet/models/detectors/single_stage.py:84: TracerWarning: Iterating over a tensor might cause the trace to be incorrect. Passing a tensor of different shape won't change the number of iterations executed (and might lead to errors or silently give incorrect results).
...
/opt/conda/lib/python3.8/site-packages/torch/onnx/utils.py:1178: UserWarning: The shape inference of mmdeploy::TRTBatchedNMS type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function. (Triggered internally at /opt/conda/conda-bld/pytorch_1666642991888/work/torch/csrc/jit/passes/onnx/shape_type_inference.cpp:1884.)
_C._jit_pass_onnx_graph_shape_type_inference(
07/06 02:13:37 - mmengine - INFO - Execute onnx optimize passes.
07/06 02:13:46 - mmengine - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
07/06 02:13:48 - mmengine - INFO - Start pipeline mmdeploy.apis.utils.utils.to_backend in subprocess
07/06 02:13:48 - mmengine - INFO - Successfully loaded tensorrt plugins from /root/workspace/mmdeploy/mmdeploy/lib/libmmdeploy_tensorrt_ops.so
[07/06/2023-02:13:48] [TRT] [I] [MemUsageChange] Init CUDA: CPU +457, GPU +0, now: CPU 548, GPU 511 (MiB)
[07/06/2023-02:13:48] [TRT] [I] [MemUsageSnapshot] Begin constructing builder kernel library: CPU 548 MiB, GPU 511 MiB
[07/06/2023-02:13:48] [TRT] [I] [MemUsageSnapshot] End constructing builder kernel library: CPU 702 MiB, GPU 555 MiB
[libprotobuf WARNING /home/jenkins/agent/workspace/OSS/OSS_L0_MergeRequest/oss/build/third_party.protobuf/src/third_party.protobuf/src/google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING /home/jenkins/agent/workspace/OSS/OSS_L0_MergeRequest/oss/build/third_party.protobuf/src/third_party.protobuf/src/google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 909753679
[07/06/2023-02:13:49] [TRT] [I] ----------------------------------------------------------------
[07/06/2023-02:13:49] [TRT] [I] Input filename: /workdir/mmdeploy-workdir/dyhead_swin_1024.onnx
[07/06/2023-02:13:49] [TRT] [I] ONNX IR version: 0.0.6
[07/06/2023-02:13:49] [TRT] [I] Opset version: 11
[07/06/2023-02:13:49] [TRT] [I] Producer name: pytorch
[07/06/2023-02:13:49] [TRT] [I] Producer version: 1.13.0
[07/06/2023-02:13:49] [TRT] [I] Domain:
[07/06/2023-02:13:49] [TRT] [I] Model version: 0
[07/06/2023-02:13:49] [TRT] [I] Doc string:
[07/06/2023-02:13:49] [TRT] [I] ----------------------------------------------------------------
[libprotobuf WARNING /home/jenkins/agent/workspace/OSS/OSS_L0_MergeRequest/oss/build/third_party.protobuf/src/third_party.protobuf/src/google/protobuf/io/coded_stream.cc:604] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING /home/jenkins/agent/workspace/OSS/OSS_L0_MergeRequest/oss/build/third_party.protobuf/src/third_party.protobuf/src/google/protobuf/io/coded_stream.cc:81] The total number of bytes read was 909753679
[07/06/2023-02:13:49] [TRT] [W] parsers/onnx/onnx2trt_utils.cpp:364: Your ONNX model has been generated with INT64 weights, while TensorRT does not natively support INT64. Attempting to cast down to INT32.
[07/06/2023-02:13:49] [TRT] [I] No importer registered for op: Xor. Attempting to import as plugin.
[07/06/2023-02:13:49] [TRT] [I] Searching for plugin: Xor, plugin_version: 1, plugin_namespace:
[07/06/2023-02:13:49] [TRT] [E] parsers/onnx/ModelImporter.cpp:780: While parsing node number 41 [Xor -> "/backbone/stages.0/blocks.0/attn/Xor_output_0"]:
[07/06/2023-02:13:49] [TRT] [E] parsers/onnx/ModelImporter.cpp:781: --- Begin node ---
[07/06/2023-02:13:49] [TRT] [E] parsers/onnx/ModelImporter.cpp:782: input: "/backbone/stages.0/blocks.0/attn/Constant_5_output_0"
input: "/backbone/stages.0/blocks.0/attn/Constant_5_output_0"
output: "/backbone/stages.0/blocks.0/attn/Xor_output_0"
name: "/backbone/stages.0/blocks.0/attn/Xor"
op_type: "Xor"
[07/06/2023-02:13:49] [TRT] [E] parsers/onnx/ModelImporter.cpp:783: --- End node ---
[07/06/2023-02:13:49] [TRT] [E] parsers/onnx/ModelImporter.cpp:785: ERROR: parsers/onnx/builtin_op_importers.cpp:4870 In function importFallbackPluginImporter:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Process Process-3:
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
ret = func(*args, **kwargs)
File "/root/workspace/mmdeploy/mmdeploy/apis/utils/utils.py", line 98, in to_backend
return backend_mgr.to_backend(
File "/root/workspace/mmdeploy/mmdeploy/backend/tensorrt/backend_manager.py", line 127, in to_backend
onnx2tensorrt(
File "/root/workspace/mmdeploy/mmdeploy/backend/tensorrt/onnx2tensorrt.py", line 79, in onnx2tensorrt
from_onnx(
File "/root/workspace/mmdeploy/mmdeploy/backend/tensorrt/utils.py", line 185, in from_onnx
raise RuntimeError(f'Failed to parse onnx, {error_msgs}')
RuntimeError: Failed to parse onnx, In node 41 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
07/06 02:13:49 - mmengine - ERROR - /root/workspace/mmdeploy/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 - `mmdeploy.apis.utils.utils.to_backend` with Call id: 1 failed. exit.
Describe the bug
I tried to convert a DyHead .pth checkpoint to TensorRT, but hit the error below:
No importer registered for op: Xor. Attempting to import as plugin
With mmdeploy 0.x and mmdet 2.x, the same conversion succeeded.
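The node name in the traceback (/backbone/stages.0/blocks.0/attn/Xor) points into the Swin attention block, and the node dump shows both of its inputs are the same constant. The offending op can be confirmed in the exported ONNX before the TensorRT step; here is a minimal sketch using the onnx Python package (the model path is taken from the log above):

```python
import onnx

# Load the graph that tools/deploy.py exported before the TensorRT step.
model = onnx.load('/workdir/mmdeploy-workdir/dyhead_swin_1024.onnx')

# TensorRT 8.2's ONNX parser has no importer for Xor, so every Xor node
# listed here will trip importFallbackPluginImporter during conversion.
for i, node in enumerate(model.graph.node):
    if node.op_type == 'Xor':
        print(i, node.name, list(node.input), list(node.output))
```

Since node 41 feeds the same constant into both Xor inputs, a constant-folding pass such as onnx-simplifier (`python3 -m onnxsim dyhead_swin_1024.onnx dyhead_swin_1024_folded.onnx`) might eliminate the node before from_onnx runs; I have not verified this for this model.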
Reproduction
python3 tools/deploy.py /workdir/mmdeploy-workdir/detection_onnx_static_1024x1024.py /workdir/atss_swin-l-p4-w12_fpn_dyhead_mstrain_2x_coco_original.yaml /workdir/swin_large_patch4_window12_384_22k.pth /workdir/mmdeploy-workdir/test-deploy-img-1024.jpg --work-dir /workdir/mmdeploy-workdir --device cuda --dump-info
detection_onnx_static_1024x1024.py is the deploy config.
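For reference, a static 1024x1024 TensorRT detection deploy config in mmdeploy 1.x typically looks like the sketch below, adapted from the stock detection_tensorrt_static-800x1344.py; the base paths and workspace size here are assumptions, not the actual file contents:

```python
# Sketch only: assumed to live in mmdeploy's configs/mmdet/detection/
# tree so that the relative _base_ paths below resolve.
_base_ = ['../_base_/base_static.py', '../../_base_/backends/tensorrt.py']

onnx_config = dict(input_shape=(1024, 1024))

backend_config = dict(
    common_config=dict(max_workspace_size=1 << 30),
    model_inputs=[
        dict(
            input_shapes=dict(
                input=dict(
                    min_shape=[1, 3, 1024, 1024],
                    opt_shape=[1, 3, 1024, 1024],
                    max_shape=[1, 3, 1024, 1024])))
    ])
```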
/workdir/atss_swin-l-p4-w12_fpn_dyhead_mstrain_2x_coco_original.yaml is the model config.
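For reference, its model section should match mmdet 3.x's upstream atss_swin-l-p4-w12_fpn_dyhead config; an abridged sketch, with the exact hyperparameters assumed from that upstream file:

```python
# Abridged sketch, assumed to match mmdet's
# configs/dyhead/atss_swin-l-p4-w12_fpn_dyhead_ms-2x_coco.py.
model = dict(
    type='ATSS',
    backbone=dict(
        type='SwinTransformer',
        pretrain_img_size=384,
        embed_dims=192,
        depths=[2, 2, 18, 2],
        num_heads=[6, 12, 24, 48],
        window_size=12,
        out_indices=(1, 2, 3)),
    neck=[
        dict(
            type='FPN',
            in_channels=[384, 768, 1536],
            out_channels=256,
            start_level=0,
            add_extra_convs='on_output',
            num_outs=5),
        dict(
            type='DyHead',
            in_channels=256,
            out_channels=256,
            num_blocks=6)
    ],
    bbox_head=dict(type='ATSSHead', num_classes=80, in_channels=256))
```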
/workdir/swin_large_patch4_window12_384_22k.pth is https://github.com/SwinTransformer/storage/releases/download/v1.0.0/swin_large_patch4_window12_384_22k.pth
/workdir/mmdeploy-workdir/test-deploy-img-1024.jpg is just any random 1024x1024 image.