Closed: soltkreig closed this issue 1 year ago.
UPD:
set opset_version=12 in deploy config and I got the other error:
IndexError: index_select(): Index is supposed to be a vector
frame #1: at::native::index_select_out_cpu_(at::Tensor const&, long, at::Tensor const&, at::Tensor&) + 0x3a9 (0x7f4bb95e0739 in /home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #2: at::native::index_select_cpu_(at::Tensor const&, long, at::Tensor const&) + 0xe6 (0x7f4bb95e26f6 in /home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #3: <unknown function> + 0x1d3a4c2 (0x7f4bb9cda4c2 in /home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #4: at::_ops::index_select::redispatch(c10::DispatchKeySet, at::Tensor const&, long, at::Tensor const&) + 0xb9 (0x7f4bb9875649 in /home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #5: <unknown function> + 0x3253be3 (0x7f4bbb1f3be3 in /home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #6: <unknown function> + 0x3254215 (0x7f4bbb1f4215 in /home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #7: at::_ops::index_select::call(at::Tensor const&, long, at::Tensor const&) + 0x166 (0x7f4bb98f5296 in /home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/lib/libtorch_cpu.so)
frame #8: torch::jit::onnx_constant_fold::runTorchBackendForOnnx(torch::jit::Node const*, std::vector<at::Tensor, std::allocator<at::Tensor> >&, int) + 0x1b5f (0x7f4c62fa41cf in /home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #9: <unknown function> + 0xbbd6f2 (0x7f4c62feb6f2 in /home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #10: torch::jit::ONNXShapeTypeInference(torch::jit::Node*, std::map<std::string, c10::IValue, std::less<std::string>, std::allocator<std::pair<std::string const, c10::IValue> > > const&, int) + 0xa8e (0x7f4c62ff0f3e in /home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #11: <unknown function> + 0xbc4a44 (0x7f4c62ff2a44 in /home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #12: <unknown function> + 0xb35200 (0x7f4c62f63200 in /home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
frame #13: <unknown function> + 0x2a585b (0x7f4c626d385b in /home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/lib/libtorch_python.so)
<omitting python frames>
(function ComputeConstantFolding)
Traceback (most recent call last):
File "/home/jovyan/people/Murtazin/mmdeploy/tools/torch2onnx.py", line 85, in <module>
main()
File "/home/jovyan/people/Murtazin/mmdeploy/tools/torch2onnx.py", line 47, in main
torch2onnx(
File "/home/jovyan/people/Murtazin/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
return self.call_function(func_name_, *args, **kwargs)
File "/home/jovyan/people/Murtazin/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
return self.call_function_local(func_name, *args, **kwargs)
File "/home/jovyan/people/Murtazin/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
return pipe_caller(*args, **kwargs)
File "/home/jovyan/people/Murtazin/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
ret = func(*args, **kwargs)
File "/home/jovyan/people/Murtazin/mmdeploy/mmdeploy/apis/pytorch2onnx.py", line 98, in torch2onnx
export(
File "/home/jovyan/people/Murtazin/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 356, in _wrap
return self.call_function(func_name_, *args, **kwargs)
File "/home/jovyan/people/Murtazin/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 326, in call_function
return self.call_function_local(func_name, *args, **kwargs)
File "/home/jovyan/people/Murtazin/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 275, in call_function_local
return pipe_caller(*args, **kwargs)
File "/home/jovyan/people/Murtazin/mmdeploy/mmdeploy/apis/core/pipeline_manager.py", line 107, in __call__
ret = func(*args, **kwargs)
File "/home/jovyan/people/Murtazin/mmdeploy/mmdeploy/apis/onnx/export.py", line 131, in export
torch.onnx.export(
File "/home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/onnx/__init__.py", line 316, in export
return utils.export(model, args, f, export_params, verbose, training,
File "/home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/onnx/utils.py", line 107, in export
_export(model, args, f, export_params, verbose, training, input_names, output_names,
File "/home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/onnx/utils.py", line 724, in _export
_model_to_graph(model, args, verbose, input_names,
File "/home/jovyan/people/Murtazin/mmdeploy/mmdeploy/apis/onnx/optimizer.py", line 11, in model_to_graph__custom_optimizer
graph, params_dict, torch_out = ctx.origin_func(*args, **kwargs)
File "/home/user/conda/envs/mmaction2dev/lib/python3.9/site-packages/torch/onnx/utils.py", line 544, in _model_to_graph
params_dict = torch._C._jit_pass_onnx_constant_fold(graph, params_dict,
IndexError: index_select(): Index is supposed to be a vector
UPD2: I successfully converted the Swin Transformer but still cannot convert MViT.
@soltkreig Hi, thanks for your feedback. As for the torch.einsum operator, we strongly suggest you replace it with normal operators in your PyTorch code. We have met the same problem before and used mmdeploy's rewriting feature to overwrite it. You can refer to the example in mmseg: https://github.com/open-mmlab/mmdeploy/blob/92efd9cb7b768924bc3868e7fff81eb332e75f08/mmdeploy/codebase/mmseg/models/decode_heads/ema_head.py#L30-L45
@RunningLeon Hi! Thanks for your answer. 1) I'd like to note that I use MViT from the 1.x branch of MMAction2, which is the Multiscale ViT, not MobileViT. 2) I found a solution by simply adding do_constant_folding=False to the torch.onnx.export call in mmdeploy/apis/onnx/export.py. It works for video_recognition_static.py:
torch.onnx.export(
    patched_model,
    args,
    output_path,
    export_params=True,
    input_names=input_names,
    output_names=output_names,
    opset_version=opset_version,
    dynamic_axes=dynamic_axes,
    keep_initializers_as_inputs=keep_initializers_as_inputs,
    do_constant_folding=False,
    verbose=verbose)
if input_metas is not None:
    patched_model.forward = model_forward
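As an aside on the einsum suggestion above: the idea of replacing torch.einsum with "normal" operators can be illustrated with a minimal sketch (in numpy for self-containedness; the actual mmdeploy approach registers a function rewriter as in the linked ema_head.py, and the shapes here are purely illustrative). An attention-style einsum is equivalent to a plain matmul over a transposed operand, which exporters and constant folding handle more reliably:

```python
import numpy as np

rng = np.random.default_rng(0)
q = rng.random((2, 4, 8))  # (batch, tokens, dim)
k = rng.random((2, 4, 8))

# Batched similarity scores, as often written with einsum:
scores_einsum = np.einsum('btd,bsd->bts', q, k)

# Equivalent form using only matmul and transpose:
scores_matmul = q @ k.transpose(0, 2, 1)

assert np.allclose(scores_einsum, scores_matmul)
```

The same substitution applies to torch tensors (torch.matmul plus Tensor.transpose), and is the kind of rewrite the mmseg example performs.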
@soltkreig Hi, sorry for the misunderstanding. Constant folding is done inside PyTorch; there might be bugs in certain versions of PyTorch.
@RunningLeon Thank you, I guess you may close this issue.
@soltkreig Hi, good to know. If possible, could you give our project a star? That means a lot to the maintainers. Thanks in advance.
Checklist
Describe the bug
Hi, I use the dev-1.x branch. I'm trying to convert a model but got the error below:
I found that I can change the opset version from 11 to 12 here:
mmdeploy/mmdeploy/apis/onnx/export.py
but it didn't help.
Reproduction
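Rather than editing export.py, the opset is usually set in the deploy config itself. A sketch of the relevant fragment (field names follow the common mmdeploy onnx_config convention; the save_file value here is illustrative):

```python
# Sketch of a deploy-config fragment; values are illustrative.
onnx_config = dict(
    type='onnx',
    export_params=True,
    keep_initializers_as_inputs=False,
    opset_version=12,  # bumped from the usual default of 11
    save_file='end2end.onnx')
```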
Environment
Error traceback