Closed: gufett0 closed this issue 3 months ago.
Sick, ty for the issue. Will take a look.
You can use opset >= 11, btw!
@gufett0 can you check if this problem still occurs with the latest on main?
@gufett0 could you provide a script that generates the ONNX end-to-end? Right now I need to cobble a bunch of the above together and don't quite have the time to.
@alexander-camuto sorry for the late reply. I can confirm that the problem arises only for opset <= 12. You can find a single notebook to reproduce it end-to-end here: reproducible.zip
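As a quick sanity check, here is a minimal sketch of how one could confirm which opset an exported file actually declares (assuming the `onnx` package is installed; `network.onnx` is a placeholder path, not the file from this issue):

```python
import onnx

# Placeholder path; point this at the exported model from the notebook.
model = onnx.load("network.onnx")

# opset_import lists the opset version(s) the graph was exported against.
for opset in model.opset_import:
    print(opset.domain or "ai.onnx", opset.version)
```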
I tried to generate settings based on this ONNX export, where
```python
import torch
import torchvision.transforms as transforms

def resize_activations(activations, size):
    # Resize each channel of the activation map independently, then restack.
    resized_activations = []
    for i in range(activations.size(1)):
        resized_activation = transforms.functional.resize(
            activations[:, i, :, :].unsqueeze(1), size
        ).squeeze(1)
        resized_activations.append(resized_activation)
    return torch.stack(resized_activations, dim=1)

# mymodel, img1, img2 and model_path are defined earlier in the notebook.
activations = mymodel.activations[0]
resized_activations = resize_activations(activations, (112, 112))

torch.onnx.export(
    mymodel,
    (img1, img2, resized_activations),
    model_path,
    export_params=True,
    opset_version=10,
    do_constant_folding=True,
    input_names=['img1', 'img2'],
    output_names=['cam'],
    dynamic_axes={
        'img1': {0: 'batch_size'},
        'img2': {0: 'batch_size'},
        'cam': {0: 'batch_size'},
    },
)
```
and got
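For context, the settings-generation step referred to above would look something like the following. This is a minimal sketch assuming ezkl's Python bindings are used; the file names are placeholders, not taken from the report:

```python
import ezkl

# Placeholder paths for the ONNX export above and the settings output.
model_path = "network.onnx"
settings_path = "settings.json"

# Generate circuit settings from the ONNX graph; this is the step that
# fails for the opset <= 12 export in this issue.
assert ezkl.gen_settings(model_path, settings_path)
```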
To reproduce the behaviour, you can find the class that extends nn.Module (you could use any PyTorch neural network), the ONNX file, and test inputs here: files.zip
related issue
I also tried a different class that uses F.interpolate, but in that case opset_version >= 11 is required.
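For reference, a minimal sketch of such a variant (the module below is illustrative, not the class from the report). With a bilinear F.interpolate, the op is exported as ONNX Resize, and the export is done with opset_version=11 as noted above:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UpsampleNet(nn.Module):
    # Illustrative module: a conv followed by bilinear upsampling via F.interpolate.
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)

    def forward(self, x):
        x = self.conv(x)
        # F.interpolate is exported as the ONNX Resize op.
        return F.interpolate(x, size=(112, 112), mode="bilinear", align_corners=False)

model = UpsampleNet().eval()
dummy = torch.randn(1, 3, 56, 56)

torch.onnx.export(
    model,
    dummy,
    "upsample.onnx",
    export_params=True,
    opset_version=11,  # >= 11, as required for this interpolate export
    do_constant_folding=True,
    input_names=["x"],
    output_names=["y"],
    dynamic_axes={"x": {0: "batch_size"}, "y": {0: "batch_size"}},
)
```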