Closed smrlehdgus closed 1 year ago
Fixed in #2319
2-2. Converting to TensorRT still raises: TypeError: _scaled_dot_product_attention__tensorrt() takes from 3 to 5 positional arguments but 7 were given
This error is not fixed.
python tools/deploy.py \
configs/mmocr/text-recognition/text-recognition_tensorrt_static-32x128.py \
downloads/abinet/abinet_20e_st-an_mj.py \
downloads/abinet/abinet_20e_st-an_mj_20221005_012617-ead8c139.pth \
demo/resources/text_recog.jpg \
--work-dir downloads/abinet/trt \
--device cuda
08/07 11:03:41 - mmengine - INFO - Start pipeline mmdeploy.apis.pytorch2onnx.torch2onnx in subprocess
08/07 11:03:42 - mmengine - WARNING - Failed to search registry with scope "mmocr" in the "Codebases" registry tree. As a workaround, the current "Codebases" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmocr" is a correct scope, or whether the registry is initialized.
08/07 11:03:42 - mmengine - WARNING - Failed to search registry with scope "mmocr" in the "mmocr_tasks" registry tree. As a workaround, the current "mmocr_tasks" registry in "mmdeploy" is used to build instance. This may cause unexpected failure when running the built modules. Please check whether "mmocr" is a correct scope, or whether the registry is initialized.
/home/project/mmocr/mmocr/models/textrecog/module_losses/ce_module_loss.py:101: UserWarning: padding does not exist in the dictionary
warnings.warn(
/home/project/mmocr/mmocr/models/textrecog/postprocessors/base.py:60: UserWarning: padding does not exist in the dictionary
warnings.warn(
Loads checkpoint by local backend from path: downloads/abinet/abinet_20e_st-an_mj_20221005_012617-ead8c139.pth
The model and loaded state dict do not match exactly
unexpected key in source state_dict: data_preprocessor.mean, data_preprocessor.std
08/07 11:03:42 - mmengine - WARNING - DeprecationWarning: get_onnx_config will be deprecated in the future.
08/07 11:03:42 - mmengine - INFO - Export PyTorch model to ONNX: downloads/abinet/trt/end2end.onnx.
08/07 11:03:42 - mmengine - WARNING - Can not find mmdet.models.utils.transformer.PatchMerging.forward, function rewrite will not be applied
/home/project/mmdeploy/mmdeploy/codebase/mmocr/models/text_recognition/transformer_module.py:23: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
1.0 / torch.tensor(10000).to(device).pow(
/home/project/mmdeploy/mmdeploy/codebase/mmocr/models/text_recognition/transformer_module.py:24: TracerWarning: torch.tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
torch.tensor(2 * (hid_j // 2) / d_hid)).to(device)
/home/project/mmdeploy/mmdeploy/codebase/mmocr/models/text_recognition/transformer_module.py:22: TracerWarning: torch.Tensor results are registered as constants in the trace. You can safely ignore this warning if you use this function to create tensors out of constant variables that would be the same every time you call this function. In any other case, this might cause the trace to be incorrect.
denominator = torch.Tensor([
/home/project/mmdeploy/mmdeploy/codebase/mmocr/models/text_recognition/transformer_module.py:22: TracerWarning: Converting a tensor to a Python float might cause the trace to be incorrect. We can't record the data flow of Python values, so this value will be treated as a constant in the future. This means that the trace might not generalize to other inputs!
denominator = torch.Tensor([
========== Diagnostic Run torch.onnx.export version 1.14.0a0+44dac51 ===========
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 0 ERROR ========================
Process Process-2:
Traceback (most recent call last):
File "/usr/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
self.run()
File "/usr/lib/python3.8/multiprocessing/process.py", line 108, in run
self._target(*self._args, **self._kwargs)
File "/home/project/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
ret = func(*args, **kwargs)
File "/home/project/mmdeploy/mmdeploy/apis/pytorch2onnx.py", line 98, in torch2onnx
export(
File "/home/project/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
return self.call_function(func_name_, *args, **kwargs)
File "/home/project/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
return self.call_function_local(func_name, *args, **kwargs)
File "/home/project/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
return pipe_caller(*args, **kwargs)
File "/home/project/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
ret = func(*args, **kwargs)
File "/home/project/mmdeploy/mmdeploy/apis/onnx/export.py", line 131, in export
torch.onnx.export(
File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 506, in export
_export(
File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 1533, in _export
graph, params_dict, torch_out = _model_to_graph(
File "/home/project/mmdeploy/mmdeploy/apis/onnx/optimizer.py", line 27, in model_to_graph__custom_optimizer
graph, params_dict, torch_out = ctx.origin_func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 1113, in _model_to_graph
graph, params, torch_out, module = _create_jit_graph(model, args)
File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 989, in _create_jit_graph
graph, torch_out = _trace_and_get_graph_from_model(model, args)
File "/usr/local/lib/python3.8/dist-packages/torch/onnx/utils.py", line 893, in _trace_and_get_graph_from_model
trace_graph, torch_out, inputs_states = torch.jit._get_trace_graph(
File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 1260, in _get_trace_graph
outs = ONNXTracedModule(f, strict, _force_outplace, return_inputs, _return_inputs_states)(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1480, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 127, in forward
graph, out = torch._C._create_graph_by_tracing(
File "/usr/local/lib/python3.8/dist-packages/torch/jit/_trace.py", line 118, in wrapper
outs.append(self.inner(*trace_inputs))
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1480, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1467, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/project/mmdeploy/mmdeploy/apis/onnx/export.py", line 123, in wrapper
return forward(*arg, **kwargs)
File "/home/project/mmdeploy/mmdeploy/codebase/mmocr/models/text_recognition/encoder_decoder_recognizer.py", line 35, in encoder_decoder_recognizer__forward
out_enc = self.encoder(feat, data_samples)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1480, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1467, in _slow_forward
result = self.forward(*input, **kwargs)
File "/home/project/mmocr/mmocr/models/textrecog/encoders/abi_encoder.py", line 81, in forward
feature = m(feature)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1480, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1467, in _slow_forward
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/mmcv/cnn/bricks/transformer.py", line 830, in forward
query = self.attentions[attn_index](
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1480, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1467, in _slow_forward
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/mmengine/utils/misc.py", line 395, in new_func
output = old_func(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/mmcv/cnn/bricks/transformer.py", line 542, in forward
out = self.attn(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1480, in _call_impl
return forward_call(*args, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/module.py", line 1467, in _slow_forward
result = self.forward(*input, **kwargs)
File "/usr/local/lib/python3.8/dist-packages/torch/nn/modules/activation.py", line 1164, in forward
attn_output, attn_output_weights = F.multi_head_attention_forward(
File "/usr/local/lib/python3.8/dist-packages/torch/nn/functional.py", line 5186, in multi_head_attention_forward
attn_output, attn_output_weights = _scaled_dot_product_attention(
TypeError: _scaled_dot_product_attention__tensorrt() takes from 3 to 5 positional arguments but 7 were given
08/07 11:03:44 - mmengine - ERROR - /home/project/mmdeploy/mmdeploy/apis/core/pipeline_manager.py - pop_mp_output - 80 - `mmdeploy.apis.pytorch2onnx.torch2onnx` with Call id: 0 failed. exit.
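For context on the failure: in the traceback, torch.nn.functional.multi_head_attention_forward calls _scaled_dot_product_attention with 7 positional arguments, while the mmdeploy TensorRT rewrite still declares the older 3-to-5-argument signature. A minimal sketch of a signature-tolerant rewrite (function and argument names here are hypothetical, not the actual change in #2319) would accept and ignore the extra positionals:

```python
import torch
import torch.nn.functional as F


def sdpa_tensorrt(q, k, v, attn_mask=None, dropout_p=0.0, *extra, **kwargs):
    """Scaled dot-product attention that tolerates the extra positional
    arguments newer PyTorch versions pass (e.g. is_causal), avoiding the
    "takes from 3 to 5 positional arguments but 7 were given" TypeError.
    """
    scale = q.size(-1) ** -0.5
    attn = torch.bmm(q, k.transpose(-2, -1)) * scale  # (B, L, L)
    if attn_mask is not None:
        attn = attn + attn_mask
    attn = attn.softmax(dim=-1)
    if dropout_p > 0.0:
        attn = F.dropout(attn, p=dropout_p)
    return torch.bmm(attn, v), attn


# Simulate the 7-positional call seen in the traceback:
q = k = v = torch.randn(2, 4, 8)
out, attn = sdpa_tensorrt(q, k, v, None, 0.0, False, None)
print(out.shape)  # torch.Size([2, 4, 8])
```

With a fixed 5-parameter signature this call would raise the TypeError above; the *extra/**kwargs catch-all keeps the rewrite compatible across PyTorch versions.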
Describe the bug
Attempting to convert the ABINet model to ONNX and TensorRT with tools/deploy.py fails; the reproduction command and full error traceback are shown above.