Closed: lbhm closed this issue 1 year ago.
Have there been any updates regarding ONNX support?
We use ONNX in our conversions to SADL, but only for the factorized-prior model. Please check this readme.
"bmshj2018-factorized" can be directly exported with PyTorch:
import torch
from compressai.zoo import models

net = models["bmshj2018-factorized"](quality=1, metric="mse", pretrained=True)
# net = cheng2020_anchor(quality=5, pretrained=True).to(device)

# Some dummy input
x = torch.randn(1, 3, 224, 224, requires_grad=True)

# Export the model
torch.onnx.export(
    net,                       # model being run
    x,                         # model input (or a tuple for multiple inputs)
    "model.onnx",              # where to save the model (can be a file or file-like object)
    export_params=True,        # store the trained parameter weights inside the model file
    opset_version=11,          # the ONNX version to export the model to
    do_constant_folding=True,  # whether to execute constant folding for optimization
    input_names=["input"],     # the model's input names
    output_names=["output"],   # the model's output names
    dynamic_axes={"input": {0: "batch_size"},    # variable length axes
                  "output": {0: "batch_size"}},
)
import onnx
import onnxruntime

onnx_model = onnx.load("model.onnx")
onnx_model_graph = onnx_model.graph
onnx_session = onnxruntime.InferenceSession(onnx_model.SerializeToString())
# onnx_session = onnxruntime.InferenceSession("cheng2020.onnx")

input_shape = (1, 3, 224, 224)
x = torch.randn(input_shape).numpy()

input_names = ["input"]
output_names = ["output"]
onnx_output = onnx_session.run(output_names, {input_names[0]: x})[0]
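For reference (not part of the original snippet), a quick sanity check can compare the ONNX Runtime output against the PyTorch model. This is only a sketch: it assumes the first exported output corresponds to x_hat, since CompressAI's forward() returns a dict that torch.onnx.export flattens in order, and it assumes the model was exported with net.eval() so quantization is deterministic and the two runs are comparable.

import numpy as np

with torch.no_grad():
    torch_out = net(torch.from_numpy(x))["x_hat"].cpu().numpy()
print("max abs diff:", np.abs(torch_out - onnx_output).max())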
However, an error occurs when exporting the Cheng2020 model.
Error message:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-48-bddb317a9b45> in <cell line: 12>()
10
11 # Export the model
---> 12 torch.onnx.export(net, # model being run
13 x, # model input (or a tuple for multiple inputs)
14 "cheng2020.onnx", # where to save the model (can be a file or file-like object)
15 frames
/usr/local/lib/python3.10/dist-packages/compressai/models/google.py in forward(self, x)
543 ctx_params = self.context_prediction(y_hat)
544 gaussian_params = self.entropy_parameters(
--> 545 torch.cat((params, ctx_params), dim=1)
546 )
547 scales_hat, means_hat = gaussian_params.chunk(2, 1)
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 16 but got size 14 for tensor number 1 in the list.
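Not from the thread, but the "Expected size 16 but got size 14" mismatch looks like an input-size issue rather than an ONNX-specific one: with a 224x224 input the latent y is 14x14, while z rounds up to 4x4 through the stride-2 convolutions and the hyperprior decoder upsamples it back to 16x16, so params and ctx_params cannot be concatenated. Below is a hedged sketch (net is assumed to be the cheng2020_anchor model from the traceback) that uses an input whose height and width are multiples of 64; the autoregressive context model may still cause separate problems during ONNX export or inference.

# Input height/width as multiples of 64 keep the hyperprior and context branches aligned
x = torch.randn(1, 3, 256, 256, requires_grad=True)
torch.onnx.export(
    net, x, "cheng2020.onnx",
    export_params=True, opset_version=11, do_constant_folding=True,
    input_names=["input"], output_names=["output"],
)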
@fracape @lbhm
Feature
Enable CompressAI models to be exportable to the ONNX format.
Motivation
I would like to use some of the CompressAI models in a third-party inference framework that allows models to be imported from ONNX files. However, in my tests the models do not currently support ONNX export.
Therefore, I'd like to ask: is it generally possible to rewrite the CompressAI models to support ONNX export? I have only just started reading up on the ONNX standard, so my understanding might be incomplete. Possible issues I have come up with so far are:
Any help/feedback is appreciated!
Additional context
What I tried so far:
The above code fails with