open-mmlab / mmdeploy

OpenMMLab Model Deployment Framework
https://mmdeploy.readthedocs.io/en/latest/
Apache License 2.0

Converting the mmdetection mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco model to ONNX fails #2669

Open tg-em2ai opened 9 months ago

tg-em2ai commented 9 months ago


Describe the bug

I hit an issue when trying to convert the mmdetection mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco model to ONNX.

Reproduction

This is the command I used to run the mmdeploy PyTorch-to-ONNX conversion:

python tools/deploy.py configs/mmdet/instance-seg/instance-seg_onnxruntime_dynamic.py \
/app/data/mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco/mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco.py \
/app/data/mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco/epoch_70.pth \
/app/data/mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco/Img_0002_1.png \
--work-dir /app/data/mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco/onnx --dump-info --device cpu

No modifications were made to the code or the config.

Environment

02/14 09:14:14 - mmengine - INFO -

02/14 09:14:14 - mmengine - INFO - **********Environmental information**********
/bin/sh: 1: /usr/local/cuda/bin/nvcc: not found
/bin/sh: 1: /usr/local/cuda/bin/nvcc: not found
02/14 09:14:15 - mmengine - INFO - sys.platform: linux
02/14 09:14:15 - mmengine - INFO - Python: 3.8.18 (default, Sep 11 2023, 13:40:15) [GCC 11.2.0]
02/14 09:14:15 - mmengine - INFO - CUDA available: True
02/14 09:14:15 - mmengine - INFO - numpy_random_seed: 2147483648
02/14 09:14:15 - mmengine - INFO - GPU 0,1: Quadro RTX 6000
02/14 09:14:15 - mmengine - INFO - CUDA_HOME: /usr/local/cuda
02/14 09:14:15 - mmengine - INFO - NVCC: Not Available
02/14 09:14:15 - mmengine - INFO - GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
02/14 09:14:15 - mmengine - INFO - PyTorch: 1.12.1+cu113
02/14 09:14:15 - mmengine - INFO - PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201402
  - Intel(R) Math Kernel Library Version 2020.0.0 Product Build 20191122 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.6.0 (Git Hash 52b5f107dd9cf10910aaa19cb47f3abf9b349815)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - CUDA Runtime 11.3
  - NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_86,code=sm_86
  - CuDNN 8.3.2  (built against CUDA 11.5)
  - Magma 2.5.2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CUDA_VERSION=11.3, CUDNN_VERSION=8.3.2, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_VERSION=1.12.1, USE_CUDA=ON, USE_CUDNN=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=OFF, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF,

02/14 09:14:15 - mmengine - INFO - TorchVision: 0.13.1+cu113
02/14 09:14:15 - mmengine - INFO - OpenCV: 4.8.1
02/14 09:14:15 - mmengine - INFO - MMEngine: 0.9.1
02/14 09:14:15 - mmengine - INFO - MMCV: 2.0.1
02/14 09:14:15 - mmengine - INFO - MMCV Compiler: GCC 9.3
02/14 09:14:15 - mmengine - INFO - MMCV CUDA Compiler: 11.3
02/14 09:14:15 - mmengine - INFO - MMDeploy: 1.1.0+
02/14 09:14:15 - mmengine - INFO -

02/14 09:14:15 - mmengine - INFO - **********Backend information**********
02/14 09:14:15 - mmengine - INFO - tensorrt:    None
02/14 09:14:15 - mmengine - INFO - ONNXRuntime: 1.16.1
02/14 09:14:15 - mmengine - INFO - ONNXRuntime-gpu:     None
02/14 09:14:15 - mmengine - INFO - ONNXRuntime custom ops:      NotAvailable
02/14 09:14:15 - mmengine - INFO - pplnn:       None
02/14 09:14:15 - mmengine - INFO - ncnn:        None
02/14 09:14:15 - mmengine - INFO - snpe:        None
02/14 09:14:15 - mmengine - INFO - openvino:    None
02/14 09:14:15 - mmengine - INFO - torchscript: 1.12.1+cu113
02/14 09:14:15 - mmengine - INFO - torchscript custom ops:      NotAvailable
02/14 09:14:15 - mmengine - INFO - rknn-toolkit:        None
02/14 09:14:15 - mmengine - INFO - rknn-toolkit2:       None
02/14 09:14:15 - mmengine - INFO - ascend:      None
02/14 09:14:15 - mmengine - INFO - coreml:      None
02/14 09:14:15 - mmengine - INFO - tvm: None
02/14 09:14:15 - mmengine - INFO - vacc:        None
02/14 09:14:15 - mmengine - INFO -

02/14 09:14:15 - mmengine - INFO - **********Codebase information**********
02/14 09:14:15 - mmengine - INFO - mmdet:       3.1.0
02/14 09:14:15 - mmengine - INFO - mmseg:       None
02/14 09:14:15 - mmengine - INFO - mmpretrain:  1.2.0
02/14 09:14:15 - mmengine - INFO - mmocr:       None
02/14 09:14:15 - mmengine - INFO - mmagic:      None
02/14 09:14:15 - mmengine - INFO - mmdet3d:     None
02/14 09:14:15 - mmengine - INFO - mmpose:      None
02/14 09:14:15 - mmengine - INFO - mmrotate:    None
02/14 09:14:15 - mmengine - INFO - mmaction:    None
02/14 09:14:15 - mmengine - INFO - mmrazor:     None

Error traceback

Traceback (most recent call last):
  File "/root/anaconda3/envs/mmdeploy110/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/root/anaconda3/envs/mmdeploy110/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/app/workspace/mmdeploy-1.1.0/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/app/workspace/mmdeploy-1.1.0/mmdeploy/apis/pytorch2onnx.py", line 98, in torch2onnx
    export(
  File "/app/workspace/mmdeploy-1.1.0/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
    return self.call_function(func_name_, *args, **kwargs)
  File "/app/workspace/mmdeploy-1.1.0/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
    return self.call_function_local(func_name, *args, **kwargs)
  File "/app/workspace/mmdeploy-1.1.0/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
    return pipe_caller(*args, **kwargs)
  File "/app/workspace/mmdeploy-1.1.0/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/app/workspace/mmdeploy-1.1.0/mmdeploy/apis/onnx/export.py", line 131, in export
    torch.onnx.export(
  File "/root/anaconda3/envs/mmdeploy110/lib/python3.8/site-packages/torch/onnx/__init__.py", line 350, in export
    return utils.export(
  File "/root/anaconda3/envs/mmdeploy110/lib/python3.8/site-packages/torch/onnx/utils.py", line 163, in export
    _export(
  File "/root/anaconda3/envs/mmdeploy110/lib/python3.8/site-packages/torch/onnx/utils.py", line 1074, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/app/workspace/mmdeploy-1.1.0/mmdeploy/apis/onnx/optimizer.py", line 27, in model_to_graph__custom_optimizer
    graph, params_dict, torch_out = ctx.origin_func(*args, **kwargs)
  File "/root/anaconda3/envs/mmdeploy110/lib/python3.8/site-packages/torch/onnx/utils.py", line 727, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/root/anaconda3/envs/mmdeploy110/lib/python3.8/site-packages/torch/onnx/utils.py", line 602, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/root/anaconda3/envs/mmdeploy110/lib/python3.8/site-packages/torch/onnx/utils.py", line 517, in _trace_and_get_graph_from_model
    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
  File "/root/anaconda3/envs/mmdeploy110/lib/python3.8/site-packages/torch/jit/_trace.py", line 1175, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/root/anaconda3/envs/mmdeploy110/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/anaconda3/envs/mmdeploy110/lib/python3.8/site-packages/torch/jit/_trace.py", line 127, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/root/anaconda3/envs/mmdeploy110/lib/python3.8/site-packages/torch/jit/_trace.py", line 118, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/root/anaconda3/envs/mmdeploy110/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
    return forward_call(*input, **kwargs)
  File "/root/anaconda3/envs/mmdeploy110/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1118, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/app/workspace/mmdeploy-1.1.0/mmdeploy/apis/onnx/export.py", line 123, in wrapper
    return forward(*arg, **kwargs)
  File "/app/workspace/mmdeploy-1.1.0/mmdeploy/codebase/mmdet/models/detectors/two_stage.py", line 92, in two_stage_detector__forward
    output = self.roi_head.predict(
  File "/app/workspace/mmdetection_3.1.0/mmdet/models/roi_heads/base_roi_head.py", line 118, in predict
    results_list = self.predict_bbox(
  File "/app/workspace/mmdeploy-1.1.0/mmdeploy/codebase/mmdet/models/roi_heads/standard_roi_head.py", line 62, in standard_roi_head__predict_bbox
    result_list = self.bbox_head.predict_by_feat(
  File "/app/workspace/mmdeploy-1.1.0/mmdeploy/codebase/mmdet/models/roi_heads/bbox_head.py", line 137, in bbox_head__predict_by_feat
    dets, labels = multiclass_nms(
  File "/app/workspace/mmdeploy-1.1.0/mmdeploy/core/optimizers/function_marker.py", line 266, in g
    rets = f(*args, **kwargs)
  File "/app/workspace/mmdeploy-1.1.0/mmdeploy/mmcv/ops/nms.py", line 510, in multiclass_nms
    raise NotImplementedError(f'Unsupported nms type: {nms_type}.')
NotImplementedError: Unsupported nms type: soft_nms.
02/14 08:48:31 - mmengine - ERROR - /app/workspace/mmdeploy-1.1.0/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 - `mmdeploy.apis.pytorch2onnx.torch2onnx` with Call id: 0 failed. exit.
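
The traceback ends in mmdeploy's multiclass_nms rewrite (mmdeploy/mmcv/ops/nms.py) raising NotImplementedError for nms type soft_nms, i.e. the model's test-time soft_nms setting is not supported by the ONNX export path. Below is a minimal sketch of one possible workaround, assuming the soft_nms setting comes from model.test_cfg.rcnn in the base Mask R-CNN ConvNeXt config; the wrapper file name and the iou_threshold value are illustrative assumptions, not taken from this issue.

# mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco_hard-nms.py (hypothetical wrapper config)
_base_ = ['./mask-rcnn_convnext-v2-b_fpn_lsj-3x-fcmae_coco.py']

model = dict(
    test_cfg=dict(
        rcnn=dict(
            # soft_nms is rejected by mmdeploy's multiclass_nms rewrite at export time,
            # so fall back to plain NMS for deployment; iou_threshold=0.5 is an assumed value.
            nms=dict(type='nms', iou_threshold=0.5))))

tools/deploy.py would then be pointed at this wrapper config instead of the original one; since plain NMS can produce slightly different detections than soft_nms, the exported model's accuracy should be re-validated.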
xzh929 commented 8 months ago

Same question, have you solved it?