open-mmlab / mmdeploy

OpenMMLab Model Deployment Framework
https://mmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] Deploying mmpose/demo/mmdetection_cfg/ssdlite_mobilenetv2_scratch_600e_onehand.py to TensorRT with deploy.py fails #2062

Closed Freedom-JJ closed 1 year ago

Freedom-JJ commented 1 year ago

Checklist

Describe the bug

Deploying mmpose/demo/mmdetection_cfg/ssdlite_mobilenetv2_scratch_600e_onehand.py to TensorRT with deploy.py fails with an error.

Reproduction

  1. python tools/deploy.py configs/mmdet/detection/detection_tensorrt_dynamic-64x64-608x608.py ../mmpose/demo/mmdetection_cfg/ssdlite_mobilenetv2_scratch_600e_onehand.py https://download.openmmlab.com/mmdetection/v2.0/ssd/ssdlite_mobilenetv2_scratch_600e_coco/ssdlite_mobilenetv2_scratch_600e_coco_20210629_110627-974d9307.pth /home/data/jiangdehong/human.png --device cuda --work-dir /home/data/jiangdehong/module_gpu/mmdet/ssdlite/ --dump-info
  2. python tools/deploy.py configs/mmdet/detection/detection_tensorrt_static-320x320.py ../mmpose/demo/mmdetection_cfg/ssdlite_mobilenetv2_scratch_600e_onehand.py https://download.openmmlab.com/mmdetection/v2.0/ssd/ssdlite_mobilenetv2_scratch_600e_coco/ssdlite_mobilenetv2_scratch_600e_coco_20210629_110627-974d9307.pth /home/data/jiangdehong/human.png --device cuda --work-dir /home/data/jiangdehong/module_gpu/mmdet/ssdlite/ --dump-info

Environment

05/08 01:02:16 - mmengine - INFO - 

05/08 01:02:16 - mmengine - INFO - **********Environmental information**********
05/08 01:02:17 - mmengine - INFO - sys.platform: linux
05/08 01:02:17 - mmengine - INFO - Python: 3.8.13 | packaged by conda-forge | (default, Mar 25 2022, 05:56:18) [GCC 10.3.0]
05/08 01:02:17 - mmengine - INFO - CUDA available: True
05/08 01:02:17 - mmengine - INFO - numpy_random_seed: 2147483648
05/08 01:02:17 - mmengine - INFO - GPU 0: Orin
05/08 01:02:17 - mmengine - INFO - CUDA_HOME: /usr/local/cuda-11.4
05/08 01:02:17 - mmengine - INFO - NVCC: Cuda compilation tools, release 11.4, V11.4.315
05/08 01:02:17 - mmengine - INFO - GCC: gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
05/08 01:02:17 - mmengine - INFO - PyTorch: 1.11.0
05/08 01:02:17 - mmengine - INFO - PyTorch compiling details: PyTorch built with:
  - GCC 9.4
  - C++ Version: 201402
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: NO AVX
  - CUDA Runtime 11.4
  - NVCC architecture flags: -gencode;arch=compute_72,code=sm_72;-gencode;arch=compute_87,code=sm_87
  - CuDNN 8.6
    - Built with CuDNN 8.3.2
  - Build settings: BLAS_INFO=open, BUILD_TYPE=Release, CUDA_VERSION=11.4, CUDNN_VERSION=8.3.2, CXX_COMPILER=/usr/bin/c++, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -DEDGE_PROFILER_USE_KINETO -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, FORCE_FALLBACK_CUDA_MPI=1, LAPACK_INFO=open, TORCH_VERSION=1.11.0, USE_CUDA=ON, USE_CUDNN=ON, USE_EIGEN_FOR_BLAS=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=OFF, USE_MKLDNN=OFF, USE_MPI=ON, USE_NCCL=0, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 

05/08 01:02:17 - mmengine - INFO - TorchVision: 0.12.0
05/08 01:02:17 - mmengine - INFO - OpenCV: 4.7.0
05/08 01:02:17 - mmengine - INFO - MMEngine: 0.7.3
05/08 01:02:17 - mmengine - INFO - MMCV: 2.0.0
05/08 01:02:17 - mmengine - INFO - MMCV Compiler: GCC 9.4
05/08 01:02:17 - mmengine - INFO - MMCV CUDA Compiler: 11.4
05/08 01:02:17 - mmengine - INFO - MMDeploy: 1.0.0+840adcf
05/08 01:02:17 - mmengine - INFO - 

05/08 01:02:17 - mmengine - INFO - **********Backend information**********
05/08 01:02:17 - mmengine - INFO - tensorrt:    8.5.2.2
05/08 01:02:17 - mmengine - INFO - tensorrt custom ops: Available
05/08 01:02:17 - mmengine - INFO - ONNXRuntime: None
05/08 01:02:17 - mmengine - INFO - pplnn:       None
05/08 01:02:17 - mmengine - INFO - ncnn:        None
05/08 01:02:17 - mmengine - INFO - snpe:        None
05/08 01:02:17 - mmengine - INFO - openvino:    None
05/08 01:02:17 - mmengine - INFO - torchscript: 1.11.0
05/08 01:02:17 - mmengine - INFO - torchscript custom ops:      NotAvailable
05/08 01:02:18 - mmengine - INFO - rknn-toolkit:        None
05/08 01:02:18 - mmengine - INFO - rknn-toolkit2:       None
05/08 01:02:18 - mmengine - INFO - ascend:      None
05/08 01:02:18 - mmengine - INFO - coreml:      None
05/08 01:02:18 - mmengine - INFO - tvm: None
05/08 01:02:18 - mmengine - INFO - vacc:        None
05/08 01:02:18 - mmengine - INFO - 

05/08 01:02:18 - mmengine - INFO - **********Codebase information**********
05/08 01:02:18 - mmengine - INFO - mmdet:       3.0.0
05/08 01:02:18 - mmengine - INFO - mmseg:       None
05/08 01:02:18 - mmengine - INFO - mmpretrain:  None
05/08 01:02:18 - mmengine - INFO - mmocr:       None
05/08 01:02:18 - mmengine - INFO - mmedit:      None
05/08 01:02:18 - mmengine - INFO - mmdet3d:     None
05/08 01:02:18 - mmengine - INFO - mmpose:      1.0.0
05/08 01:02:18 - mmengine - INFO - mmrotate:    None
05/08 01:02:18 - mmengine - INFO - mmaction:    None
05/08 01:02:18 - mmengine - INFO - mmrazor:     None

Error traceback

No response

mm-assistant[bot] commented 1 year ago

We recommend using English or English & Chinese for issues so that we could have broader discussion.

Freedom-JJ commented 1 year ago

I have tried various config files, but they all fail with the following error:

Traceback (most recent call last):
  File "/home/a/archiconda3/envs/test/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/a/archiconda3/envs/test/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/pytorch2onnx.py", line 98, in torch2onnx
    export(
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
    return self.call_function(func_name_, *args, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
    return self.call_function_local(func_name, *args, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
    return pipe_caller(*args, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/onnx/export.py", line 131, in export
    torch.onnx.export(
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/onnx/__init__.py", line 305, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/onnx/utils.py", line 118, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/onnx/utils.py", line 719, in _export
    _model_to_graph(model, args, verbose, input_names,
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/onnx/optimizer.py", line 11, in model_to_graph__custom_optimizer
    graph, params_dict, torch_out = ctx.origin_func(*args, **kwargs)
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/onnx/utils.py", line 499, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/onnx/utils.py", line 440, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/onnx/utils.py", line 391, in _trace_and_get_graph_from_model
    torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/jit/_trace.py", line 1166, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/jit/_trace.py", line 127, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/jit/_trace.py", line 118, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1098, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/onnx/export.py", line 123, in wrapper
    return forward(*arg, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/codebase/mmdet/models/detectors/single_stage.py", line 89, in single_stage_detector__forward
    return __forward_impl(self, batch_inputs, data_samples=data_samples)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/core/optimizers/function_marker.py", line 266, in g
    rets = f(*args, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/codebase/mmdet/models/detectors/single_stage.py", line 24, in __forward_impl
    output = self.bbox_head.predict(x, data_samples, rescale=False)
  File "/home/a/jiangdehong/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 197, in predict
    predictions = self.predict_by_feat(
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/codebase/mmdet/models/dense_heads/base_dense_head.py", line 145, in base_dense_head__predict_by_feat
    max_scores, _ = nms_pre_score[..., :-1].max(-1)
IndexError: max(): Expected reduction dim 2 to have non-zero size.

05/08 00:48:02 - mmengine - ERROR - /home/a/jiangdehong/mmdeploy/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 - mmdeploy.apis.pytorch2onnx.torch2onnx with Call id: 0 failed. exit.
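The failing line, `max_scores, _ = nms_pre_score[..., :-1].max(-1)`, slices off the last class channel (treated as background) before taking the per-anchor maximum. One plausible cause (my assumption, not confirmed in this thread) is that the single-class one-hand model leaves only one score channel, so the slice produces a zero-size dimension and the reduction fails. A minimal NumPy sketch of the same failure mode, with hypothetical shapes:

```python
import numpy as np

# Hypothetical score tensor: (batch, num_anchors, num_class_channels).
# A single-class head would produce just one channel.
nms_pre_score = np.random.rand(1, 100, 1)

trimmed = nms_pre_score[..., :-1]  # drops the only channel -> shape (1, 100, 0)
print(trimmed.shape)               # (1, 100, 0)

# Reducing over the now-empty last axis raises an error, analogous to
# torch's "IndexError: max(): Expected reduction dim 2 to have non-zero size."
try:
    trimmed.max(-1)
except ValueError as err:
    print("reduction over empty axis failed:", err)
```

If this is indeed the cause, the fix would lie in how the head's class channels are configured or sliced for this model, rather than in the TensorRT conversion itself.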

Freedom-JJ commented 1 year ago

Thank you all very much.

rohansaw commented 8 months ago

Did you find a solution to this? You marked the issue as completed, but no solution is apparent from the thread. I am facing the same problem, so it seems to still be open.