open-mmlab / mmdeploy

OpenMMLab Model Deployment Framework
https://mmdeploy.readthedocs.io/en/latest/
Apache License 2.0

How to combine mmdet model and mmpose model for the inference #2279

Closed kdavidlp123 closed 1 year ago

kdavidlp123 commented 1 year ago

📚 The doc issue

Hi, I am a beginner with mmdeploy.

I recently trained my own mmdet model (Faster R-CNN) and mmpose model (ResNet-50) for a real-time webcam project. But when I ran topdown_demo_with_mmdet.py on my GPU (RTX 3090), the FPS was only about 7. I read some issues on mmpose, and the author recommended deploying the models to increase the FPS. Here are my questions:

  1. Do I need to deploy the mmdet model and the mmpose model separately?
  2. If so, after deploying, how do I combine the two models for inference?

I need some help from you. Can you please give me some advice or an example?

Suggest a potential alternative/fix

None

RunningLeon commented 1 year ago

Hi,

  1. You need to deploy the two models separately and then chain them at inference time (see the sketch after this list).
  2. Alternatively, you can try YOLOX-Pose, which performs object detection and keypoint detection at the same time.
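
For the "combine" step, once both models are converted (with --dump-info so the SDK can load them), you can chain them with the mmdeploy_runtime Python API. Below is a rough sketch based on mmdeploy's demo/python/det_pose.py; the model directories, image path, and score threshold are placeholders, not values from this issue.

import cv2
import numpy as np
from mmdeploy_runtime import Detector, PoseDetector

# SDK model directories produced by tools/deploy.py --dump-info (placeholder paths)
detector = Detector(model_path='mmdeploy_models/faster_rcnn', device_name='cuda', device_id=0)
pose_detector = PoseDetector(model_path='mmdeploy_models/res50_pose', device_name='cuda', device_id=0)

img = cv2.imread('demo.jpg')

# 1) detect people with the deployed mmdet model
bboxes, labels, _ = detector(img)
keep = np.logical_and(labels == 0, bboxes[..., 4] > 0.6)  # keep class 0 with score > 0.6
bboxes = bboxes[keep, :4]

# 2) estimate keypoints inside the kept boxes with the deployed mmpose model
poses = pose_detector(img, bboxes)
print(poses)  # keypoints (with scores) for each detected person
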
kdavidlp123 commented 1 year ago

Hi, I have been trying to convert my custom model for a few days, but I still get an error.

Traceback (most recent call last):
  File "C:\Users\user\anaconda3\envs\openmmlab\lib\multiprocessing\process.py", line 315, in _bootstrap
    self.run()
  File "C:\Users\user\anaconda3\envs\openmmlab\lib\multiprocessing\process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmdeploy\apis\core\pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmdeploy\apis\pytorch2onnx.py", line 64, in torch2onnx
    data, model_inputs = task_processor.create_input(
  File "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmdeploy\codebase\mmpose\deploy\pose_detection.py", line 238, in create_input
    meta_data = _get_dataset_metainfo(self.model_cfg)
  File "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmdeploy\codebase\mmpose\deploy\pose_detection.py", line 102, in _get_dataset_metainfo
    meta = dataset_mmpose._load_metainfo(
  File "d:\mmpose\mmpose\datasets\datasets\base\base_coco_style_dataset.py", line 131, in _load_metainfo
    metainfo = parse_pose_metainfo(metainfo)
  File "d:\mmpose\mmpose\datasets\datasets\utils.py", line 108, in parse_pose_metainfo
    raise FileNotFoundError(
FileNotFoundError: The metainfo config file "configs/_base_/datasets/custom.py" does not exist.

Training finishes successfully, but the conversion says it cannot find my custom config. Here is my custom config:

dataset_info = dict(
    dataset_name='custom',
    paper_info=dict(
        author='Lin, Tsung-Yi and Maire, Michael and '
        'Belongie, Serge and Hays, James and '
        'Perona, Pietro and Ramanan, Deva and '
        r'Doll{\'a}r, Piotr and Zitnick, C Lawrence',
        title='Microsoft coco: Common objects in context',
        container='European conference on computer vision',
        year='2014',
        homepage='http://cocodataset.org/',
    ),
    keypoint_info={
        # 21 keypoints named '0'..'20'; the type and swap fields are left unset
        i: dict(name=str(i), id=i, color=[255, 0, 0])
        for i in range(21)
    },
    skeleton_info={},
    joint_weights=[1.] * 21,
    sigmas=[0.047] * 21)

I only modified this config file starting from coco.py and did not register the dataset. Is that the problem? Thank you.

RunningLeon commented 1 year ago

Yes. The custom dataset class should be registered (a rough sketch is shown below).
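
For reference, registering a custom COCO-style dataset in mmpose 1.x looks roughly like the sketch below; the class name and file location are placeholders, and the new module still has to be imported (e.g. listed in datasets/__init__.py or loaded via custom_imports in the config) so the registry actually sees it.

# mmpose/datasets/datasets/body/custom_dataset.py (hypothetical location)
from mmpose.datasets.datasets.base import BaseCocoStyleDataset
from mmpose.registry import DATASETS


@DATASETS.register_module()
class CustomDataset(BaseCocoStyleDataset):
    """COCO-style dataset whose metainfo comes from the custom.py file above."""

    METAINFO: dict = dict(from_file='configs/_base_/datasets/custom.py')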

kdavidlp123 commented 1 year ago

Hi, I would like to ask whether this deployment process was successful:

07/19 19:20:37 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
07/19 19:20:37 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "mmpose_tasks" registry tree. As a workaround, the current "mmpose_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
07/19 19:20:38 - mmengine - INFO - Start pipeline mmdeploy.apis.pytorch2onnx.torch2onnx in subprocess
07/19 19:20:40 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
07/19 19:20:40 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "mmpose_tasks" registry tree. As a workaround, the current "mmpose_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
Loads checkpoint by local backend from path: D:\mmpose\training_log\epoch_190.pth
C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmpose\datasets\datasets\utils.py:102: UserWarning: The metainfo config file "configs/_base_/datasets/custom.py" does not exist. A matched config file "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmpose\.mim\configs\_base_\datasets\custom.py" will be used instead.
  warnings.warn(
07/19 19:20:42 - mmengine - WARNING - DeprecationWarning: get_onnx_config will be deprecated in the future.
07/19 19:20:42 - mmengine - INFO - Export PyTorch model to ONNX: mmdeploy_models/resnet152_gpu_384x288\end2end.onnx.
07/19 19:20:44 - mmengine - WARNING - Can not find torch._C._jit_pass_onnx_autograd_function_process, function rewrite will not be applied
07/19 19:20:44 - mmengine - WARNING - Can not find torch._C._jit_pass_onnx_deduplicate_initializers, function rewrite will not be applied
07/19 19:20:58 - mmengine - INFO - Execute onnx optimize passes.
07/19 19:20:58 - mmengine - WARNING - Can not optimize model, please build torchscipt extension.
More details: https://github.com/open-mmlab/mmdeploy/tree/1.x/docs/en/experimental/onnx_optimizer.md
07/19 19:21:00 - mmengine - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
07/19 19:21:03 - mmengine - INFO - Start pipeline mmdeploy.apis.utils.utils.to_backend in subprocess
07/19 19:21:03 - mmengine - INFO - Successfully loaded tensorrt plugins from C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmdeploy\lib\mmdeploy_tensorrt_ops.dll
[07/19/2023-19:21:03] [TRT] [I] [MemUsageChange] Init CUDA: CPU +478, GPU +0, now: CPU 8983, GPU 1439 (MiB)
[07/19/2023-19:21:08] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +522, GPU +116, now: CPU 9966, GPU 1555 (MiB)
[07/19/2023-19:21:08] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
[07/19/2023-19:21:08] [TRT] [I] ----------------------------------------------------------------
[07/19/2023-19:21:08] [TRT] [I] Input filename:   mmdeploy_models/resnet152_gpu_384x288\end2end.onnx
[07/19/2023-19:21:08] [TRT] [I] ONNX IR version:  0.0.6
[07/19/2023-19:21:08] [TRT] [I] Opset version:    11
[07/19/2023-19:21:08] [TRT] [I] Producer name:    pytorch
[07/19/2023-19:21:08] [TRT] [I] Producer version: 1.9
[07/19/2023-19:21:08] [TRT] [I] Domain:
[07/19/2023-19:21:08] [TRT] [I] Model version:    0
[07/19/2023-19:21:08] [TRT] [I] Doc string:
[07/19/2023-19:21:08] [TRT] [I] ----------------------------------------------------------------
[07/19/2023-19:21:09] [TRT] [I] [MemUsageChange] Init cuBLAS/cuBLASLt: CPU +774, GPU +260, now: CPU 10688, GPU 1815 (MiB)
[07/19/2023-19:21:10] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +380, GPU +258, now: CPU 11068, GPU 2073 (MiB)
[07/19/2023-19:21:10] [TRT] [W] TensorRT was linked against cuDNN 8.6.0 but loaded cuDNN 8.0.5
[07/19/2023-19:21:10] [TRT] [I] Local timing cache in use. Profiling results in this builder pass will not be stored.
[07/19/2023-19:21:13] [TRT] [I] Some tactics do not have sufficient workspace memory to run. Increasing workspace size will enable more tactics, please check verbose output for requested sizes.
[07/19/2023-19:22:49] [TRT] [I] Total Activation Memory: 1287406592
[07/19/2023-19:22:49] [TRT] [I] Detected 1 inputs and 1 output network tensors.
[07/19/2023-19:22:49] [TRT] [I] Total Host Persistent Memory: 358224
[07/19/2023-19:22:49] [TRT] [I] Total Device Persistent Memory: 335360
[07/19/2023-19:22:49] [TRT] [I] Total Scratch Memory: 14852096
[07/19/2023-19:22:49] [TRT] [I] [MemUsageStats] Peak memory usage of TRT CPU/GPU memory allocators: CPU 72 MiB, GPU 723 MiB
[07/19/2023-19:22:49] [TRT] [I] [BlockAssignment] Started assigning block shifts. This will take 167 steps to complete.
[07/19/2023-19:22:49] [TRT] [I] [BlockAssignment] Algorithm ShiftNTopDown took 4.123ms to assign 5 blocks to 167 nodes requiring 24142336 bytes.
[07/19/2023-19:22:49] [TRT] [I] Total Activation Memory: 24142336
[07/19/2023-19:22:49] [TRT] [I] [MemUsageChange] Init cuDNN: CPU +0, GPU +8, now: CPU 11674, GPU 2629 (MiB)
[07/19/2023-19:22:49] [TRT] [W] TensorRT was linked against cuDNN 8.6.0 but loaded cuDNN 8.0.5
[07/19/2023-19:22:49] [TRT] [I] [MemUsageChange] TensorRT-managed allocation in building engine: CPU +40, GPU +327, now: CPU 40, GPU 327 (MiB)
07/19 19:22:51 - mmengine - INFO - Finish pipeline mmdeploy.apis.utils.utils.to_backend
07/19 19:22:52 - mmengine - INFO - visualize tensorrt model start.
07/19 19:22:55 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
07/19 19:22:55 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "mmpose_tasks" registry tree. As a workaround, the current "mmpose_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
07/19 19:22:55 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "backend_segmentors" registry tree. As a workaround, the current "backend_segmentors" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
07/19 19:22:56 - mmengine - INFO - Successfully loaded tensorrt plugins from C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmdeploy\lib\mmdeploy_tensorrt_ops.dll
07/19 19:22:56 - mmengine - INFO - Successfully loaded tensorrt plugins from C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmdeploy\lib\mmdeploy_tensorrt_ops.dll
[07/19/2023-19:22:57] [TRT] [W] TensorRT was linked against cuDNN 8.6.0 but loaded cuDNN 8.0.5
[07/19/2023-19:22:57] [TRT] [W] TensorRT was linked against cuDNN 8.6.0 but loaded cuDNN 8.0.5
[07/19/2023-19:22:57] [TRT] [W] CUDA lazy loading is not enabled. Enabling it can significantly reduce device memory usage. See `CUDA_MODULE_LOADING` in https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#env-vars
C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmpose\datasets\datasets\utils.py:102: UserWarning: The metainfo config file "configs/_base_/datasets/custom.py" does not exist. A matched config file "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmpose\.mim\configs\_base_\datasets\custom.py" will be used instead.
  warnings.warn(
C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmpose\datasets\datasets\utils.py:102: UserWarning: The metainfo config file "configs/_base_/datasets/custom.py" does not exist. A matched config file "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmpose\.mim\configs\_base_\datasets\custom.py" will be used instead.
  warnings.warn(
07/19 19:23:01 - mmengine - INFO - visualize tensorrt model success.
07/19 19:23:01 - mmengine - INFO - visualize pytorch model start.
07/19 19:23:04 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
07/19 19:23:04 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "mmpose_tasks" registry tree. As a workaround, the current "mmpose_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
Loads checkpoint by local backend from path: D:\mmpose\training_log\epoch_190.pth
C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmpose\datasets\datasets\utils.py:102: UserWarning: The metainfo config file "configs/_base_/datasets/custom.py" does not exist. A matched config file "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmpose\.mim\configs\_base_\datasets\custom.py" will be used instead.
  warnings.warn(
C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmpose\datasets\datasets\utils.py:102: UserWarning: The metainfo config file "configs/_base_/datasets/custom.py" does not exist. A matched config file "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmpose\.mim\configs\_base_\datasets\custom.py" will be used instead.
  warnings.warn(
07/19 19:23:09 - mmengine - INFO - visualize pytorch model success.
07/19 19:23:09 - mmengine - INFO - All process success.

I obtained the end2end.engine, but I still wonder whether this line is normal: 07/19 19:20:58 - mmengine - WARNING - Can not optimize model, please build torchscipt extension.

BTW, I changed the config file and modified the input, ops, and output from 256x196 to 384x288. Is this the correct way to use an input size other than 256x196?

RunningLeon commented 1 year ago

Seems OK. You can double-check the visualized result. BTW, you can ignore the warning about optimizing the ONNX graph. As for the input size, the shape in the deploy config should be consistent with the input_size in your model config (see the sketch below).
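
To illustrate the consistency requirement, here is a rough sketch of the two places that must agree for a 384x288 top-down model; the file names are illustrative, and the codec values are taken from the standard COCO 384x288 heatmap config, so adjust them to your own model.

# deploy config, e.g. configs/mmpose/pose-detection_tensorrt_static-384x288.py
onnx_config = dict(input_shape=[288, 384])  # [width, height]

# model config: the codec must use the matching input size
codec = dict(
    type='MSRAHeatmap', input_size=(288, 384), heatmap_size=(72, 96), sigma=3)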

kdavidlp123 commented 1 year ago

Hi, I have converted the model to TensorRT, thank you. I would also like to ask about converting the model to ncnn format; it ran into some errors.

(openmmlab) PS D:\mmdeploy> python tools/deploy.py configs/mmpose/pose-detection_ncnn_static-384x288.py D:\mmpose\configs\body_2d_keypoint\topdown_heatmap\coco\td-hm_res152_8xb32-210e_coco-384x288.py D:\mmpose\work_dirs\td-hm_res152_8xb32-210e_coco-384x288\epoch_50.pth D:\mmpose\tests\data\coco\test_80.png --work-dir mmdeploy_models/res152_ncnn--dump-info   Fatal Python error: init_sys_streams: can't initialize sys standard streams
Python runtime state: core initialized
Traceback (most recent call last):
  File "<frozen importlib._bootstrap>", line 991, in _find_and_load
  File "<frozen importlib._bootstrap>", line 975, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 671, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 839, in exec_module
  File "<frozen importlib._bootstrap_external>", line 934, in get_code
  File "<frozen importlib._bootstrap_external>", line 1032, in get_data
KeyboardInterrupt
Traceback (most recent call last):
  File "tools/deploy.py", line 335, in <module>
    main()
  File "tools/deploy.py", line 142, in main
    torch2ir(ir_type)(
  File "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmdeploy\apis\core\pipeline_manager.py", line 356, in _wrap
    return self.call_function(func_name_, *args, **kwargs)
  File "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmdeploy\apis\core\pipeline_manager.py", line 324, in call_function
    return self.get_result_sync(call_id)
  File "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmdeploy\apis\core\pipeline_manager.py", line 304, in get_result_sync
    proc.join()
  File "C:\Users\user\anaconda3\envs\openmmlab\lib\multiprocessing\process.py", line 149, in join
    res = self._popen.wait(timeout)
  File "C:\Users\user\anaconda3\envs\openmmlab\lib\multiprocessing\popen_spawn_win32.py", line 108, in wait
    res = _winapi.WaitForSingleObject(int(self._handle), msecs)
KeyboardInterrupt
(openmmlab) PS D:\mmdeploy> python tools/deploy.py configs/mmpose/pose-detection_ncnn_static-384x288.py D:\mmpose\configs\body_2d_keypoint\topdown_heatmap\coco\td-hm_res152_8xb32-210e_coco-384x288.py D:\mmpose\work_dirs\td-hm_res152_8xb32-210e_coco-384x288\epoch_50.pth D:\mmpose\tests\data\coco\test_80.png --work-dir mmdeploy_models/res152_ncnn --dump-info
07/24 18:55:17 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
07/24 18:55:17 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "mmpose_tasks" registry tree. As a workaround, the current "mmpose_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
07/24 18:55:18 - mmengine - INFO - Start pipeline mmdeploy.apis.pytorch2onnx.torch2onnx in subprocess
07/24 18:55:20 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
07/24 18:55:20 - mmengine - WARNING - Failed to search registry with scope "mmpose" in the "mmpose_tasks" registry tree. As a workaround, the current "mmpose_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmpose" is a correct scope, or whether the registry is initialized.
Loads checkpoint by local backend from path: D:\mmpose\work_dirs\td-hm_res152_8xb32-210e_coco-384x288\epoch_50.pth
C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmpose\datasets\datasets\utils.py:102: UserWarning: The metainfo config file "configs/_base_/datasets/custom.py" does not exist. A matched config file "C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmpose\.mim\configs\_base_\datasets\custom.py" will be used instead.
  warnings.warn(
07/24 18:55:21 - mmengine - WARNING - DeprecationWarning: get_onnx_config will be deprecated in the future.
07/24 18:55:21 - mmengine - INFO - Export PyTorch model to ONNX: mmdeploy_models/res152_ncnn\end2end.onnx.
07/24 18:55:21 - mmengine - WARNING - Can not find torch._C._jit_pass_onnx_autograd_function_process, function rewrite will not be applied
07/24 18:57:01 - mmengine - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
07/24 18:57:01 - mmengine - INFO - Start pipeline mmdeploy.apis.utils.utils.to_backend in main process
07/24 18:57:01 - mmengine - ERROR - C:\Users\user\anaconda3\envs\openmmlab\lib\site-packages\mmdeploy\backend\ncnn\backend_manager.py - to_backend - 128 - ncnn support is not available, please make sure:
1) `mmdeploy_onnx2ncnn` existed in `PATH`
2) python import ncnn success

I have already installed ncnn, but I can't figure out what is wrong.

kdavidlp123 commented 1 year ago

Also, as the RTMPose project mentions here, RTMPose inference has been benchmarked on a Snapdragon 865 chip. Does the model run like an app on the phone's chip for inference? I am eager to test it on a smartphone. May I have a quick example, like this one?

RunningLeon commented 1 year ago

> Hi, I have converted the model to TensorRT, thank you. I would also like to ask about converting the model to ncnn format; it ran into some errors. [...]
>
> I have already installed ncnn, but I can't figure out what is wrong.

Hi, after installing ncnn, you have to build mmdeploy with the ncnn backend:


New-Item -Path build -ItemType Directory -Force
cd build
cmake .. -A x64 -T v142 `
  -DMMDEPLOY_BUILD_SDK_PYTHON_API=ON `
  -DMMDEPLOY_BUILD_SDK=ON `
  -DMMDEPLOY_TARGET_DEVICES="cpu" `
  -DMMDEPLOY_TARGET_BACKENDS="ncnn" `
  -DMMDEPLOY_CODEBASES="all" `
  -DOpenCV_DIR="$env:OPENCV_DIR\build\x64\vc15\lib" `
  -DMMDEPLOY_BUILD_EXAMPLES=OFF `
  -Dncnn_DIR="path to ncnn" `
  -DCUDNN_DIR="$env:CUDNN_DIR"

cmake --build . --config Release -- /m
cmake --install . --config Release
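
After the build, a quick way to sanity-check the two conditions from the error message before re-running tools/deploy.py (this snippet is just a sketch, not part of mmdeploy's tooling):

import shutil

# 1) mmdeploy_onnx2ncnn must be found on PATH
print(shutil.which('mmdeploy_onnx2ncnn'))  # should print a path, not None

# 2) the ncnn Python bindings must be importable
try:
    import ncnn
    print('ncnn import OK')
except ImportError as err:
    print('ncnn import failed:', err)
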
RunningLeon commented 1 year ago

> Also, as the RTMPose project mentions here, RTMPose inference has been benchmarked on a Snapdragon 865 chip. Does the model run like an app on the phone's chip for inference? I am eager to test it on a smartphone. May I have a quick example, like this one?

Something like https://github.com/hanrui1sensetime/PoseTracker-Android-Prototype?

github-actions[bot] commented 1 year ago

This issue is marked as stale because it has been marked as invalid or awaiting response for 7 days without any further response. It will be closed in 5 days if the stale label is not removed or if there is no further response.

github-actions[bot] commented 1 year ago

This issue is closed because it has been stale for 5 days. Please open a new issue if you have similar issues or you have any new updates now.