Closed — Freedom-JJ closed this issue 1 year ago
We recommend using English or English & Chinese for issues so that we could have broader discussion.
I have tried various config files, but all of them produce errors.
```
Traceback (most recent call last):
  File "/home/a/archiconda3/envs/test/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/home/a/archiconda3/envs/test/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/pytorch2onnx.py", line 98, in torch2onnx
    export(
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
    return self.call_function(func_name_, *args, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
    return self.call_function_local(func_name, *args, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
    return pipe_caller(*args, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
    ret = func(*args, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/onnx/export.py", line 131, in export
    torch.onnx.export(
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/onnx/__init__.py", line 305, in export
    return utils.export(model, args, f, export_params, verbose, training,
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/onnx/utils.py", line 118, in export
    _export(model, args, f, export_params, verbose, training, input_names, output_names,
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/onnx/utils.py", line 719, in _export
    _model_to_graph(model, args, verbose, input_names,
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/onnx/optimizer.py", line 11, in model_to_graph__custom_optimizer
    graph, params_dict, torch_out = ctx.origin_func(*args, **kwargs)
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/onnx/utils.py", line 499, in _model_to_graph
    graph, params, torch_out, module = _create_jit_graph(model, args)
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/onnx/utils.py", line 440, in _create_jit_graph
    graph, torch_out = _trace_and_get_graph_from_model(model, args)
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/onnx/utils.py", line 391, in _trace_and_get_graph_from_model
    torch.jit._get_trace_graph(model, args, strict=False, _force_outplace=False, _return_inputs_states=True)
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/jit/_trace.py", line 1166, in _get_trace_graph
    outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/jit/_trace.py", line 127, in forward
    graph, out = torch._C._create_graph_by_tracing(
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/jit/_trace.py", line 118, in wrapper
    outs.append(self.inner(*trace_inputs))
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1110, in _call_impl
    return forward_call(*input, **kwargs)
  File "/home/a/archiconda3/envs/test/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1098, in _slow_forward
    result = self.forward(*input, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/apis/onnx/export.py", line 123, in wrapper
    return forward(*arg, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/codebase/mmdet/models/detectors/single_stage.py", line 89, in single_stage_detector__forward
    return __forward_impl(self, batch_inputs, data_samples=data_samples)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/core/optimizers/function_marker.py", line 266, in g
    rets = f(*args, **kwargs)
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/codebase/mmdet/models/detectors/single_stage.py", line 24, in __forward_impl
    output = self.bbox_head.predict(x, data_samples, rescale=False)
  File "/home/a/jiangdehong/mmdetection/mmdet/models/dense_heads/base_dense_head.py", line 197, in predict
    predictions = self.predict_by_feat(
  File "/home/a/jiangdehong/mmdeploy/mmdeploy/codebase/mmdet/models/dense_heads/base_dense_head.py", line 145, in base_dense_head__predict_by_feat
    max_scores, _ = nms_pre_score[..., :-1].max(-1)
IndexError: max(): Expected reduction dim 2 to have non-zero size.
05/08 00:48:02 - mmengine - ERROR - /home/a/jiangdehong/mmdeploy/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 - `mmdeploy.apis.pytorch2onnx.torch2onnx` with Call id: 0 failed. exit.
```
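For context, the `IndexError` comes from reducing over an empty dimension: `nms_pre_score[..., :-1]` drops the last score column (which that code path assumes is a background class), so when the head predicts only a single class the class dimension becomes zero-sized and `.max(-1)` has nothing to reduce over. A minimal NumPy sketch of the same failure mode (NumPy raises `ValueError` where PyTorch raises `IndexError`; the shapes here are illustrative, not taken from the actual model):

```python
import numpy as np

# Illustrative scores for a single-class detector:
# shape (batch, num_priors, num_classes) with num_classes == 1.
nms_pre_score = np.random.rand(1, 100, 1)

# The slice assumes a trailing background column, so with one class
# it leaves a zero-sized class dimension: (1, 100, 0).
sliced = nms_pre_score[..., :-1]
print(sliced.shape)  # (1, 100, 0)

try:
    sliced.max(-1)  # reducing over the empty class dimension must fail
except ValueError as err:
    print("reduction failed:", err)
```

This suggests checking whether the chosen deploy config matches the head's classification setup (e.g. sigmoid vs. softmax scoring), since a mismatch there can make the exporter expect an extra background column that the single-class head never produces.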
Many thanks, everyone.
Did you find a solution to this? You marked the issue as completed, however, no solution is apparent from the thread. I am facing the same problem, so it seems to be still open.
Checklist
Describe the bug
Deploying mmpose/demo/mmdetection_cfg/ssdlite_mobilenetv2_scratch_600e_onehand.py to TensorRT with deploy.py fails with an error.
Reproduction
Environment
Error traceback
No response