Open xduris1 opened 1 year ago
You may try to reinstall mmdeploy:
cd mmdeploy
pip install -v -e .
I tried reinstalling mmdeploy using
pip install -v -e .
on branch 1.x, commit f69c636.
The version of mmdeploy is 1.0.0rc3.
After reinstallation I am getting this error:
[8] Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
Process Process-3:
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/opt/conda/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/data/rtmdet_conversion/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
ret = func(*args, **kwargs)
File "/data/rtmdet_conversion/mmdeploy/mmdeploy/apis/utils/utils.py", line 98, in to_backend
return backend_mgr.to_backend(
File "/data/rtmdet_conversion/mmdeploy/mmdeploy/backend/tensorrt/backend_manager.py", line 127, in to_backend
onnx2tensorrt(
File "/data/rtmdet_conversion/mmdeploy/mmdeploy/backend/tensorrt/onnx2tensorrt.py", line 79, in onnx2tensorrt
from_onnx(
File "/data/rtmdet_conversion/mmdeploy/mmdeploy/backend/tensorrt/utils.py", line 180, in from_onnx
raise RuntimeError(f'Failed to parse onnx, {error_msgs}')
RuntimeError: Failed to parse onnx, In node 383 (importFallbackPluginImporter): UNSUPPORTED_NODE: Assertion failed: creator && "Plugin not found, are the plugin name, version, and namespace correct?"
03/06 11:48:09 - mmengine - ERROR - /data/rtmdet_conversion/mmdeploy/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 - `mmdeploy.apis.utils.utils.to_backend` with Call id: 1 failed. exit.
I think this is because the installation from the repository does not contain the plugins for TensorRT conversion.
These are provided in the prebuilt package from
https://github.com/open-mmlab/mmdeploy/releases/download/v1.0.0rc3/mmdeploy-1.0.0rc3-linux-x86_64-cuda11.1-tensorrt8.2.3.0.tar.gz
however, that is the version with the aforementioned VACC issue.
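One way to check whether the TensorRT custom-op plugins are actually present before attempting a conversion is to try loading the plugin shared library yourself. The sketch below is an illustration only: the library path is an assumption that depends on how and where mmdeploy was built, so adjust it to your checkout.

```python
import ctypes
import os


def load_trt_plugin_lib(lib_path: str) -> bool:
    """Attempt to load a TensorRT plugin shared library.

    Returns True if the library was loaded, False if it is missing or
    cannot be opened (e.g. built against a different CUDA/TensorRT).
    """
    if not os.path.isfile(lib_path):
        return False
    try:
        # Loading the .so registers its plugin creators with TensorRT.
        ctypes.CDLL(lib_path)
        return True
    except OSError:
        return False


# Hypothetical location; the actual path depends on the mmdeploy build.
plugin_lib = "mmdeploy/lib/libmmdeploy_tensorrt_ops.so"
if not load_trt_plugin_lib(plugin_lib):
    print("Plugin library not found -- custom ops will trigger "
          "'Plugin not found' during ONNX parsing.")
```

If this reports the library as missing, a plain `pip install -e .` did not build the TensorRT ops, which is consistent with the parse failure above.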
I built mmdeploy from source and installed it with pip install -e .
That way, I could not reproduce your issue.
Are you saying the issue can be reproduced as follows?
1. install mmdeploy from prebuilt package
2. do model conversion
Yes, I was following the tutorial from the MMDet RTMDet config section: https://github.com/open-mmlab/mmdetection/tree/3.x/configs/rtmdet. I was installing for TensorRT conversion.
More specifically, I installed everything according to Step 1 (Install MMDeploy), then checked that the condition from the section "Deploy RTMDet Instance Segmentation Model" (MMDeploy >= v1.0.0rc2)
is true.
I had the same problem:
WARNING: The shape inference of prim::Constant type is missing, so it may result in wrong shape inference for the exported graph. Please consider adding it in symbolic function.
03/06 22:04:09 - mmengine - INFO - Execute onnx optimize passes.
03/06 22:04:09 - mmengine - WARNING - Can not optimize model, please build torchscipt extension.
More details: https://github.com/open-mmlab/mmdeploy/tree/1.x/docs/en/experimental/onnx_optimizer.md
03/06 22:04:09 - mmengine - INFO - Finish pipeline mmdeploy.apis.pytorch2onnx.torch2onnx
Traceback (most recent call last):
File "mmdeploy/tools/deploy.py", line 335, in
but in order to deploy on CPU:
python mmdeploy/tools/deploy.py \
    mmdeploy/configs/mmdet/instance-seg/instance-seg_rtmdet-ins_onnxruntime_static-640x640.py \
    mmdetection/configs/rtmdet/rtmdet-ins_tiny_8xb32-300e_coco.py \
    checkpoints/rtmdet-ins_tiny_8xb32-300e_coco_20221130_151727-ec670f7e.pth \
    mmdetection/demo/demo.jpg \
    --work-dir ./work_dirs/rtmdet-ins \
    --device cpu
waiting for a solution...
@flyzxm5177 I mentioned the workaround in the initial post.
Possible fix:
Comment out lines 229-253 in deploy.py until the issue is resolved.
Then the script finishes successfully.
I was able to convert to TensorRT successfully after this; however, I do not consider this a good fix. I hope this helps you as a temporary workaround for the time being.
It works, thank you!
Checklist
Describe the bug
When running the deploy.py script with the command:
python tools/deploy.py \
    configs/mmdet/instance-seg/instance-seg_rtmdet-ins_tensorrt_static-640x640.py \
    ${PATH_TO_MMDET}/configs/rtmdet/rtmdet-ins_s_8xb32-300e_coco.py \
    checkpoint/rtmdet-ins_s_8xb32-300e_coco/rtmdet-ins_s_8xb32-300e_coco_20221121_212604-fdc5d7ec.pth \
    demo/resources/det.jpg \
    --work-dir ./work_dirs/rtmdet-ins \
    --device cuda:0 \
    --show
the script fails due to an unhandled dependency on VACC in the backend enums.
Possible fix: comment out lines 229-253 in deploy.py until the issue is resolved. Then the script finishes successfully.
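Rather than commenting the lines out, the failure mode (a hard import of an SDK that is not installed) can be avoided by probing each backend and skipping the unavailable ones. The following is a minimal sketch of that pattern, not mmdeploy's actual code; the module names passed in are hypothetical and for illustration only.

```python
import importlib


def probe_backends(candidates):
    """Return the subset of candidate backend modules that can be
    imported, silently skipping those whose SDKs are not installed."""
    available = []
    for name in candidates:
        try:
            importlib.import_module(name)
            available.append(name)
        except ImportError:
            # e.g. the VACC SDK is absent on most machines; skip it
            continue
    return available


# Hypothetical backend module names, for illustration only.
print(probe_backends(["tensorrt", "onnxruntime", "vacc"]))
```

With such a guard, a missing optional backend degrades to "not listed" instead of crashing the whole deploy pipeline.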
Reproduction
Run, on branch 1.x of mmdeploy:
python tools/deploy.py \
    configs/mmdet/instance-seg/instance-seg_rtmdet-ins_tensorrt_static-640x640.py \
    ${PATH_TO_MMDET}/configs/rtmdet/rtmdet-ins_s_8xb32-300e_coco.py \
    checkpoint/rtmdet-ins_s_8xb32-300e_coco/rtmdet-ins_s_8xb32-300e_coco_20221121_212604-fdc5d7ec.pth \
    demo/resources/det.jpg \
    --work-dir ./work_dirs/rtmdet-ins \
    --device cuda:0 \
    --show
Environment
Error traceback