Closed AryaAftab closed 1 year ago
Hi @AryaAftab,
I would suggest you convert the following model class to .onnx instead: https://github.com/facebookresearch/co-tracker/blob/ab0ce3c97795222f528c52c53dcbb0bf95e9b785/cotracker/models/core/cotracker/cotracker.py#L69
You can try something like this:
```python
import torch
from cotracker.predictor import CoTrackerPredictor

predictor = CoTrackerPredictor(checkpoint)

# All inputs should be resized to 384x512
dummy_video = torch.randn(1, 8, 3, 384, 512, device="cuda")
dummy_queries = torch.randn(1, 8, 3, device="cuda")

# We take a video and queried points as input
input_names = ["input_video", "input_queries"]
output_names = ["output_tracks", "output_feature", "output_visib", "output_metadata"]

# Video length is also dynamic
dynamic_axes_dict = {
    "input_video": {
        0: "batch_size",
        1: "video_len",
    },
    "input_queries": {
        0: "batch_size",
        1: "num_queries",  # axis 1 of the queries is the number of points, not the video length
    },
}

torch.onnx.export(
    predictor.model,
    (dummy_video, dummy_queries),  # the model takes two inputs, so pass a tuple
    "cotracker.onnx",
    verbose=False,
    input_names=input_names,
    output_names=output_names,
    dynamic_axes=dynamic_axes_dict,
    export_params=True,
)
```
Could you tell me if this approach works? I haven't tried to convert CoTracker to .onnx yet. It might need some refactoring.
Hi @nikitakaraevv, thanks for your response. I tried this with several modifications, but I got some errors.
```python
import os

import torch
from cotracker.models.build_cotracker import build_cotracker_stride_4_wind_8

model = build_cotracker_stride_4_wind_8(
    checkpoint=os.path.join('./checkpoints/cotracker_stride_4_wind_8.pth')
)

device = "cpu"

# All inputs should be resized to 384x512
dummy_input_1 = torch.randn(1, 8, 3, 384, 512, device=device)  # video
dummy_input_2 = torch.randn(1, 8, 3, device=device)            # queried points

# We take a video and queried points as input
input_names = ["input_video", "input_queries"]
output_names = ["output_tracks", "output_feature", "output_visib", "output_metadata"]

# Video length is also dynamic
dynamic_axes_dict = {
    "input_video": {
        0: "batch_size",
        1: "video_len",
    },
    "input_queries": {
        0: "batch_size",
        1: "num_queries",
    },
}

torch.onnx.export(
    model,
    (dummy_input_1, dummy_input_2),
    "cotracker.onnx",
    verbose=False,
    input_names=input_names,
    output_names=output_names,
    dynamic_axes=dynamic_axes_dict,
    export_params=True,
    opset_version=16,
)
```
Error:
```
============= Diagnostic Run torch.onnx.export version 2.0.1+cu118 =============
verbose: False, log level: Level.ERROR
======================= 0 NONE 0 NOTE 0 WARNING 1 ERROR ========================
ERROR: missing-standard-symbolic-function
=========================================
Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 16 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
None
<Set verbose=True to see more details>
---------------------------------------------------------------------------
UnsupportedOperatorError                  Traceback (most recent call last)
<ipython-input-13-9985d6d41604> in <cell line: 21>()
     19 }
     20
---> 21 torch.onnx.export(model,
     22                   (dummy_input_1, dummy_input_2),
     23                   "cotracker.onnx",

4 frames

/usr/local/lib/python3.10/dist-packages/torch/onnx/utils.py in _run_symbolic_function(graph, block, node, inputs, env, operator_export_type)
   1899         return graph_context.op(op_name, *inputs, **attrs, outputs=node.outputsSize())  # type: ignore[attr-defined]
   1900
-> 1901     raise errors.UnsupportedOperatorError(
   1902         symbolic_function_name,
   1903         opset_version,

UnsupportedOperatorError: Exporting the operator 'aten::scaled_dot_product_attention' to ONNX opset version 16 is not supported. Please feel free to request support or submit a pull request on PyTorch GitHub: https://github.com/pytorch/pytorch/issues.
```
Thank you, @AryaAftab!
The issue is discussed here: https://github.com/pytorch/pytorch/issues/97262. Does that help?
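One workaround along the lines of that thread is to decompose the fused attention call into plain matmul/softmax ops that opset 16 can export, e.g. by monkeypatching `torch.nn.functional.scaled_dot_product_attention` (or editing the attention module in the CoTracker code) before calling `torch.onnx.export`. A sketch of the decomposition, not tested against CoTracker itself:

```python
import math

import torch
import torch.nn.functional as F

# Decomposed attention using only ops that opset 16 can export
# (matmul, scaling, softmax); a possible stand-in for the unsupported
# aten::scaled_dot_product_attention node.
def sdpa_decomposed(q, k, v, attn_mask=None):
    scale = 1.0 / math.sqrt(q.size(-1))
    scores = torch.matmul(q, k.transpose(-2, -1)) * scale
    if attn_mask is not None:
        scores = scores + attn_mask
    return torch.matmul(F.softmax(scores, dim=-1), v)

# Sanity check against the fused PyTorch kernel
q = torch.randn(2, 4, 8, 16)
k = torch.randn(2, 4, 8, 16)
v = torch.randn(2, 4, 8, 16)
assert torch.allclose(sdpa_decomposed(q, k, v),
                      F.scaled_dot_product_attention(q, k, v), atol=1e-5)
```

With `F.scaled_dot_product_attention` swapped for this function inside the model, the exporter should only see standard matmul/softmax nodes. Alternatively, newer PyTorch releases may already ship a symbolic function for this op, so upgrading PyTorch is worth trying first.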
Hi, are there any updates on converting the CoTracker model to ONNX or TensorRT?
Hi @dat-nguyenvn, there are no updates yet, but could you email me at nikita@robots.ox.ac.uk? I'm curious to learn more about your use case.
Did this ever work?
Hi, I want to convert the model to ONNX format, but I get an error. Can anyone help me solve the problem? Note: I am loading and converting the model in Colab.
Conversion code:
Error: