Alexankharin opened this issue 2 years ago
Hello @Alexankharin, could you please share the model you are using so we can take a closer look at the problem?
The model is under NDA, so a minimal reproducible error-causing model is attached instead. It is generated by this simple PyTorch code:
```python
import torch

class MinimalExample(torch.nn.Module):
    def __init__(self):
        super(MinimalExample, self).__init__()

    def forward(self, inpx):
        toret = []
        for i in range(5):
            toret.append(inpx.roll(i, -1))
        return toret

model = MinimalExample()
inpx = torch.randn(1, 3, 256, 256)
torch.onnx.export(model,
                  inpx,
                  "minimalexample.onnx",
                  export_params=True,
                  opset_version=11,
                  do_constant_folding=True,
                  input_names=['input'],
                  output_names=['output{}'.format(i) for i in range(5)])
```
https://drive.google.com/file/d/1O4Z5RLAd1fR6gtq6_zAAHCOIk0bNkkG2/view?usp=sharing
All of the model outputs should have the same shape, but Barracuda shows the wrong output shape for output0.
My model includes some operations with tensor roll; the problematic Torch code is the one above, and it converted to an ONNX model without errors. Nevertheless, the shape of the output is wrong in the case where i equals 0. Exploring the ONNX model in Netron shows that roll is implemented as slicing followed by concatenation. Barracuda appears to process the [0:0] slice incorrectly: it should return an empty tensor with size 0 along the sliced dimension, while during graph execution it seems to return a tensor with the same shape as the input (see image).
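The expected semantics can be checked in PyTorch itself. The sketch below is my reconstruction of the slice-plus-concat decomposition that Netron shows for roll (not the exact ONNX ops): the tail slice has size i along the rolled dimension, so for i == 0 it must be an empty tensor, and the concatenation still reproduces the input shape:

```python
import torch

x = torch.randn(1, 3, 256, 256)
n = x.shape[-1]

for i in range(5):
    tail = x[..., n - i:]   # for i == 0 this is the slice [256:256] -> empty tensor
    head = x[..., :n - i]
    rolled = torch.cat([tail, head], dim=-1)
    # The decomposition matches roll, and every result keeps the input shape.
    assert torch.equal(rolled, x.roll(i, -1))
    assert rolled.shape == x.shape

# The i == 0 case Barracuda gets wrong: the [0:0]-style slice must have
# size 0 along the sliced dimension, not the full input shape.
print(x[..., n:n].shape)  # torch.Size([1, 3, 256, 0])
```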