open-mmlab / mmdeploy

OpenMMLab Model Deployment Framework
https://mmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] rtmpose convert to ncnn backend model error: "unsupported split axis", "mmdeploy::AdaptiveAvgPool2d type is missing" #2079

Closed. RachelLYY closed this issue 1 year ago.

RachelLYY commented 1 year ago

Describe the bug

[Screenshots: model conversion to ncnn fails with "unsupported split axis" and "mmdeploy::AdaptiveAvgPool2d type is missing"]

Reproduction

cd mmdeploy

# convert rtmpose model to ncnn model with static shape

python tools/deploy.py configs/mmpose/pose-detection_ncnn_static-256x192.py \
../mmpose/projects/rtmpose/rtmpose/body_2d_keypoint/rtmpose-t_8xb256-420e_coco-256x192.py \
rtmpose-tiny_simcc-aic-coco_pt-aic-coco_420e-256x192-cfc8f33d_20230126.pth demo/resources/human-pose.jpg \
--work-dir mmdeploy_models/rtmpose --device cpu --show --dump-info

Environment

05/15 10:31:28 - mmengine - INFO - 

05/15 10:31:28 - mmengine - INFO - **********Environmental information**********
05/15 10:31:29 - mmengine - INFO - sys.platform: linux
05/15 10:31:29 - mmengine - INFO - Python: 3.8.16 (default, Mar  2 2023, 03:21:46) [GCC 11.2.0]
05/15 10:31:29 - mmengine - INFO - CUDA available: False
05/15 10:31:29 - mmengine - INFO - numpy_random_seed: 2147483648
05/15 10:31:29 - mmengine - INFO - GCC: gcc (Ubuntu 11.3.0-1ubuntu1~22.04) 11.3.0
05/15 10:31:29 - mmengine - INFO - PyTorch: 2.0.0
05/15 10:31:29 - mmengine - INFO - PyTorch compiling details: PyTorch built with:
  - GCC 9.3
  - C++ Version: 201703
  - Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
  - Intel(R) MKL-DNN v2.7.3 (Git Hash 6dbeffbae1f23cbbeae17adb7b5b13f1f37c080e)
  - OpenMP 201511 (a.k.a. OpenMP 4.5)
  - LAPACK is enabled (usually provided by MKL)
  - NNPACK is enabled
  - CPU capability usage: AVX2
  - Build settings: BLAS_INFO=mkl, BUILD_TYPE=Release, CXX_COMPILER=/opt/rh/devtoolset-9/root/usr/bin/c++, CXX_FLAGS= -D_GLIBCXX_USE_CXX11_ABI=0 -fabi-version=11 -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -DNDEBUG -DUSE_KINETO -DLIBKINETO_NOCUPTI -DLIBKINETO_NOROCTRACER -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DSYMBOLICATE_MOBILE_DEBUG_HANDLE -O2 -fPIC -Wall -Wextra -Werror=return-type -Werror=non-virtual-dtor -Werror=bool-operation -Wnarrowing -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wunused-local-typedefs -Wno-unused-parameter -Wno-unused-function -Wno-unused-result -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Werror=cast-function-type -Wno-stringop-overflow, LAPACK_INFO=mkl, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, TORCH_DISABLE_GPU_ASSERTS=ON, TORCH_VERSION=2.0.0, USE_CUDA=0, USE_CUDNN=OFF, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=OFF, USE_NNPACK=ON, USE_OPENMP=ON, USE_ROCM=OFF, 

05/15 10:31:29 - mmengine - INFO - TorchVision: 0.15.0
05/15 10:31:29 - mmengine - INFO - OpenCV: 4.7.0
05/15 10:31:29 - mmengine - INFO - MMEngine: 0.7.0
05/15 10:31:29 - mmengine - INFO - MMCV: 2.0.0rc4
05/15 10:31:29 - mmengine - INFO - MMCV Compiler: GCC 11.3
05/15 10:31:29 - mmengine - INFO - MMCV CUDA Compiler: not available
05/15 10:31:29 - mmengine - INFO - MMDeploy: 1.0.0rc3+26b66ef
05/15 10:31:29 - mmengine - INFO - 

05/15 10:31:29 - mmengine - INFO - **********Backend information**********
05/15 10:31:29 - mmengine - INFO - tensorrt: None
05/15 10:31:29 - mmengine - INFO - ONNXRuntime: 1.8.1
05/15 10:31:29 - mmengine - INFO - ONNXRuntime-gpu: None
05/15 10:31:29 - mmengine - INFO - ONNXRuntime custom ops: Available
05/15 10:31:29 - mmengine - INFO - pplnn: None
05/15 10:31:29 - mmengine - INFO - ncnn: 1.0.20230223
05/15 10:31:29 - mmengine - INFO - ncnn custom ops: NotAvailable
05/15 10:31:29 - mmengine - INFO - snpe: None
05/15 10:31:29 - mmengine - INFO - openvino: None
05/15 10:31:29 - mmengine - INFO - torchscript: 2.0.0
05/15 10:31:29 - mmengine - INFO - torchscript custom ops: NotAvailable
05/15 10:31:29 - mmengine - INFO - rknn-toolkit: None
05/15 10:31:29 - mmengine - INFO - rknn-toolkit2: None
05/15 10:31:29 - mmengine - INFO - ascend: None
05/15 10:31:29 - mmengine - INFO - coreml: None
05/15 10:31:29 - mmengine - INFO - tvm: None
05/15 10:31:29 - mmengine - INFO - 

05/15 10:31:29 - mmengine - INFO - **********Codebase information**********
05/15 10:31:29 - mmengine - INFO - mmdet: 3.0.0rc6
05/15 10:31:29 - mmengine - INFO - mmseg: None
05/15 10:31:29 - mmengine - INFO - mmcls: None
05/15 10:31:29 - mmengine - INFO - mmocr: None
05/15 10:31:29 - mmengine - INFO - mmedit: None
05/15 10:31:29 - mmengine - INFO - mmdet3d: None
05/15 10:31:29 - mmengine - INFO - mmpose: 1.0.0rc1
05/15 10:31:29 - mmengine - INFO - mmrotate: None
05/15 10:31:29 - mmengine - INFO - mmaction: None

Error traceback

No response

RunningLeon commented 1 year ago

@RachelLYY hi, have you changed anything in the deploy config or the model config? For rtmpose, you should use configs/mmpose/pose-detection_simcc_ncnn_static-256x192.py
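
For reference, the same conversion command with the SimCC deploy config (keeping all other paths from the report above) would look roughly like this:

python tools/deploy.py configs/mmpose/pose-detection_simcc_ncnn_static-256x192.py \
    ../mmpose/projects/rtmpose/rtmpose/body_2d_keypoint/rtmpose-t_8xb256-420e_coco-256x192.py \
    rtmpose-tiny_simcc-aic-coco_pt-aic-coco_420e-256x192-cfc8f33d_20230126.pth \
    demo/resources/human-pose.jpg \
    --work-dir mmdeploy_models/rtmpose --device cpu --show --dump-info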

RachelLYY commented 1 year ago

@RachelLYY hi, have you changed anything in the deploy config or the model config? For rtmpose, you should use configs/mmpose/pose-detection_simcc_ncnn_static-256x192.py

Hi, I didn't change anything in the config files. I just tried running deploy.py with configs/mmpose/pose-detection_simcc_ncnn_static-256x192.py, but I still get the same error as posted above. However, I did change line 71 of site-packages/mmdeploy/backend/ncnn/onnx2ncnn.py to specify onnx2ncnn_path explicitly, because the original call get_onnx2ncnn_path() returns ''. [Screenshot from 2023-05-15 14-39-04 of the modified file] I set onnx2ncnn_path to the mmdeploy_onnx2ncnn binary, which is the output of make when building the ncnn custom ops following the tutorial.
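
For context, the local workaround described above amounts to falling back to a hardcoded converter path when the lookup returns an empty string. A minimal sketch of the idea (the import path and the fallback location are assumptions based on the description, not the actual upstream code):

from mmdeploy.backend.ncnn.init_plugins import get_onnx2ncnn_path  # assumed import path

onnx2ncnn_path = get_onnx2ncnn_path()
if not onnx2ncnn_path:
    # fall back to the mmdeploy_onnx2ncnn binary produced by `make` when
    # building the ncnn custom ops; adjust to your own build directory
    onnx2ncnn_path = '/path/to/mmdeploy/build/bin/mmdeploy_onnx2ncnn'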

RunningLeon commented 1 year ago

Something is wrong with your environment. You could use the docker image instead.

Create a container:

docker pull openmmlab/mmdeploy:ubuntu20.04-cuda11.3-mmdeploy1.0.0
docker run -it --rm --gpus=all openmmlab/mmdeploy:ubuntu20.04-cuda11.3-mmdeploy1.0.0
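
If the checkpoint and model config live on the host, one way to make them visible inside the container is a bind mount; the host and container paths below are only placeholders:

# mount a host directory containing the checkpoint/configs into the container
docker run -it --rm --gpus=all \
    -v /path/on/host/models:/root/workspace/models \
    openmmlab/mmdeploy:ubuntu20.04-cuda11.3-mmdeploy1.0.0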

RachelLYY commented 1 year ago

Hi, I started this Docker container by running docker run -it openmmlab/mmdeploy:ubuntu20.04-cuda11.3-mmdeploy1.0.0. However, when I try to run python3 tools/deploy.py with the CPU device, I get the following error: ImportError: cannot import name 'cfg_apply_marks' from 'mmdeploy.utils'. Additionally, what is wrong with my original environment? Is it because the ncnn custom ops are not available?

RunningLeon commented 1 year ago

ncnn custom ops not available?

Hi,

  1. You have to install the codebase in the docker container, e.g. python3 -m mim install 'mmpose>=1.0.0' (see the sketch after this list).
  2. The ncnn custom ops should be included; they are already built in the docker image.
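
For instance, a minimal sequence inside the container might be (the check_env step just re-prints the environment report shown earlier):

# inside the container: install the pose codebase, then re-check the environment
python3 -m mim install 'mmpose>=1.0.0'
python3 tools/check_env.py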

RachelLYY commented 1 year ago

Hi, I can run deploy.py in the docker environment now. Thanks a lot!