I see that you used detection_tensorrt-int8_static-320x320.py
as the config. What is your TensorRT version? Once you provide it, I can try to reproduce the bug.
I tried almost the same thing: converting RTMDet_tiny to .onnx to deploy it on an aarch64 target. I used a deploy config like the one in configs/mmdet/detection/detection_onnxruntime_static.py
with an ObjectDetection task, and my error is the same.
Environment:
python 3.8.10
onnx 1.13.0
cuda-python 12.1.0rc1+1.g9e30ea2.dirty
torch 1.14.0a0+44dac51
torch-tensorrt 1.4.0.dev0
torchvision 0.15.0a0
mmdet 3.0.0rc6
mmengine 0.6.0
mmcv 2.0.0rc4
mmrazor 1.0.0rc0
mmdeploy 1.0.0rc3
I tried with mmdeploy dev-1.x and also with 1.x.
Error traceback:
Process Process-2:
Traceback (most recent call last):
  File "/opt/conda/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/opt/conda/lib/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/mmdeploy/mmdeploy/apis/pytorch2onnx.py", line 98, in torch2onnx
    export(
  File "/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
    return self.call_function(func_name_, *args, **kwargs)
  File "/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
    return self.call_function_local(func_name, *args, **kwargs)
  File "/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
    return pipe_caller(*args, **kwargs)
  File "/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/mmdeploy/mmdeploy/apis/onnx/export.py", line 131, in export
    torch.onnx.export(
  File "/opt/conda/lib/python3.9/site-packages/torch/onnx/utils.py", line 504, in export
    _export(
  File "/opt/conda/lib/python3.9/site-packages/torch/onnx/utils.py", line 1529, in _export
    graph, params_dict, torch_out = _model_to_graph(
  File "/mmdeploy/mmdeploy/apis/onnx/optimizer.py", line 11, in model_to_graph__custom_optimizer
    graph, params_dict, torch_out = ctx.origin_func(*args, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/torch/onnx/utils.py", line 1111, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/opt/conda/lib/python3.9/site-packages/torch/onnx/utils.py", line 987, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/opt/conda/lib/python3.9/site-packages/torch/onnx/utils.py", line 891, in _trace_and_get_graph_from_model
    trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
  File "/opt/conda/lib/python3.9/site-packages/torch/jit/_trace.py", line 1184, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/torch/jit/_trace.py", line 127, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/opt/conda/lib/python3.9/site-packages/torch/jit/_trace.py", line 118, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
  File "/opt/conda/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1178, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/mmdeploy/mmdeploy/apis/onnx/export.py", line 123, in wrapper
    return forward(*arg, **kwargs)
  File "/mmdeploy/mmdeploy/codebase/mmdet/models/detectors/single_stage.py", line 89, in single_stage_detector__forward
    return __forward_impl(self, batch_inputs, data_samples=data_samples)
  File "/mmdeploy/mmdeploy/core/optimizers/function_marker.py", line 266, in g
    rets = f(*args, **kwargs)
  File "/mmdeploy/mmdeploy/codebase/mmdet/models/detectors/single_stage.py", line 24, in __forward_impl
    output = self.bbox_head.predict(x, data_samples, rescale=False)
  File "/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 197, in predict
    predictions = self.predict_by_feat(
  File "/mmdeploy/mmdeploy/codebase/mmdet/models/dense_heads/rtmdet_head.py", line 91, in rtmdet_head__predict_by_feat
    post_params = get_post_processing_params(deploy_cfg)
  File "/mmdeploy/mmdeploy/codebase/mmdet/deploy/utils.py", line 27, in get_post_processing_params
    assert post_params is not None, 'Failed to get `post_processing`.'
AssertionError: Failed to get `post_processing`.
03/15 15:13:13 - mmengine - ERROR - /mmdeploy/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 - `mmdeploy.apis.pytorch2onnx.torch2onnx` with Call id: 0 failed. exit
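For context, the assertion that fails here lives in get_post_processing_params, which simply looks up a post_processing dict inside codebase_config. A rough sketch of its logic, reconstructed from the traceback above rather than copied from the source:

# Rough sketch of mmdeploy/codebase/mmdet/deploy/utils.py::get_post_processing_params
# (mmdeploy 1.x), reconstructed from the traceback above -- not the verbatim source.
def get_post_processing_params(deploy_cfg):
    codebase_config = deploy_cfg['codebase_config']
    # The exporter hard-requires a `post_processing` dict in codebase_config:
    post_params = codebase_config.get('post_processing', None)
    assert post_params is not None, 'Failed to get `post_processing`.'
    return post_params

So any mmdet deploy config must carry codebase_config.post_processing; the shipped configs such as detection_onnxruntime_static.py appear to pull it in through their _base_ files.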
If you have any more information, I'd be curious to hear it.
Please provide your full script.
Hello, here are my configs and script:
python tools/deploy.py \
configs/deploy_config.py \
configs/rtmdet/rtmdet_tiny_syncbn_fast_4xb32-1000e_custom_dataset-v2.py \
weights.pth \
test_img.jpeg \
--work-dir=results/
My model config is the same as mmdetection/configs/rtmdet/rtmdet_tiny_8xb32-300e_coco.py, trained on a custom dataset with a 640x480 img_scale.
My deploy_config.py was:
codebase_config = dict(type='mmdet', task='ObjectDetection')
backend_config = dict(type='onnxruntime')
onnx_config = dict(
    type='onnx',
    export_params=True,
    keep_initializers_as_inputs=False,
    opset_version=11,
    save_file='smokeDet.onnx',
    input_names=['input'],
    output_names=['output'],
    input_shape=[640, 480],
    optimize=True)
Following the same error as above, I changed the codebase_config to:
codebase_config = dict(
    type='mmdet',
    task='ObjectDetection',
    post_processing=dict(
        max_output_boxes_per_class=3,
        iou_threshold=.65,
        score_threshold=.3,
        pre_top_k=20,
        keep_top_k=True))
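For comparison, here is roughly how mmdeploy's shipped defaults define these fields (paraphrased from memory of configs/mmdet/_base_/base_static.py in mmdeploy 1.x; check the repo for the exact values):

# Paraphrased from memory of mmdeploy 1.x configs/mmdet/_base_/base_static.py;
# exact values may differ. Note that keep_top_k is an integer count of boxes
# to keep after NMS (not a boolean), and pre_top_k is the number of candidates
# considered before NMS.
codebase_config = dict(
    type='mmdet',
    task='ObjectDetection',
    model_type='end2end',
    post_processing=dict(
        score_threshold=0.05,
        confidence_threshold=0.005,  # only used by some heads
        iou_threshold=0.5,
        max_output_boxes_per_class=200,
        pre_top_k=5000,
        keep_top_k=100,
        background_label_id=-1))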
Still, if I'm not mistaken, I see nothing in the documentation about precisely how to write this codebase_config. With the new config I finally got my .onnx file, but afterwards an error occurred, I believe during model visualization:
File "/mmdeploy/mmdeploy/codebase/mmdet/deploy/object_detection_model.py", line 192, in forward batch_dets, batch_labels = outputs[:2] ValueError: not enough values to unpack (expected 2, got 1) 03/15 16:13:15 - mmengine - ERROR - /mmdeploy/tools/deploy.py - create_process - 82 - visualize onnxruntime model failed.
I think the .onnx file itself is created correctly. I would like to understand better how to write this mmdeploy config, or to know whether I have missed some documentation about it.
Thank you.
It seems your custom model has only one output, while this line requires two outputs, so you will need to adapt.
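Concretely, mmdeploy's end2end detection wrapper unpacks two backend outputs (batch_dets, batch_labels = outputs[:2], as in the traceback above), so the exported graph needs two named outputs. Below is a minimal onnx_config sketch matching that contract, assuming the dets/labels names used by mmdeploy's shipped mmdet configs; save_file and input_shape simply echo the config posted above:

# Sketch of an onnx_config matching the two-output contract of mmdeploy's
# mmdet end2end wrapper. Output names assume the shipped base config:
# `dets` holds boxes plus scores, `labels` holds class indices.
onnx_config = dict(
    type='onnx',
    export_params=True,
    keep_initializers_as_inputs=False,
    opset_version=11,
    save_file='smokeDet.onnx',
    input_names=['input'],
    output_names=['dets', 'labels'],  # two outputs, not a single 'output'
    input_shape=[640, 480],
    optimize=True)

With a single name in output_names, the wrapper receives only one output, and the unpacking at object_detection_model.py line 192 raises exactly the ValueError reported above.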
This issue is marked as stale because it has been marked as invalid or awaiting response for 7 days without any further response. It will be closed in 5 days if the stale label is not removed or if there is no further response.
This issue is closed because it has been stale for 5 days. Please open a new issue if you have similar issues or you have any new updates now.
Checklist
Describe the bug
Using mmdeploy dev-1.x to convert an rtmdet-nano detection model from PyTorch to ONNX, I ran into the following problem:
Loads checkpoint by local backend from path: mmpose-tau-rtm/pretrain_model/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth
loading annotations into memory...
Done (t=0.71s)
creating index...
index created!
100%|██████████| 5000/5000 [03:46<00:00, 22.06it/s]
Error in `anaconda3_39_python/bin/python': double free or corruption (!prev): 0x00007f5596e2cde0
Reproduction
python tools/deploy.py \
    configs/mmdet/detection/detection_tensorrt-int8_static-320x320.py \
    projects/rtmpose/rtmdet/person/rtmdet_nano_320-8xb32_coco-person.py \
    pretrain_model/rtmdet_nano_8xb32-100e_coco-obj365-person-05d8511e.pth \
    demo/resources/human-pose.jpg \
    --work-dir mmdeploy_models/mmdet/ort \
    --device cuda:0 \
    --show
Environment
Error traceback