[Open] justinchuby opened this issue 3 months ago
0 | # node_aten_unbind_0
%"val_unbind"<?,?> ⬅️ pkg.onnxscript.torch_lib::aten_unbind(%"arg5_1") {dim=0}
1 | # node_Constant_1
%"val_0"<?,?> ⬅️ ::Constant() {value=Tensor<INT64,[]>(array(0), name=None)}
2 | # node_aten_getitem_2
%"val_getitem"<FLOAT,[2,3]> ⬅️ pkg.onnxscript.torch_lib::aten_getitem(%"val_unbind", %"val_0")
3 | # node_aten_unbind_3
%"val_unbind_1"<?,?> ⬅️ pkg.onnxscript.torch_lib::aten_unbind(%"arg6_1") {dim=0}
4 | # node_aten_getitem_4
%"val_getitem_1"<FLOAT,[2,3]> ⬅️ pkg.onnxscript.torch_lib::aten_getitem(%"val_unbind_1", %"val_0")
5 | # node_aten_unsqueeze_5
%"val_unsqueeze"<FLOAT,[1,2,3]> ⬅️ pkg.onnxscript.torch_lib::aten_unsqueeze(%"val_getitem") {dim=0}
We need to handle both forms of the unbind output: the traced sequence output (one value per slice) and the functional single-valued Sequence tensor output.
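A minimal sketch of how a converter might branch on the two forms. The names `OnnxSequence`, `sequence_at`, and `handle_getitem` are hypothetical stand-ins for illustration, not the actual torch_lib API:

```python
class OnnxSequence:
    """Stand-in for a single ONNX Sequence-typed value (assumption, for illustration)."""
    def __init__(self, tensors):
        self.tensors = list(tensors)


def sequence_at(seq, index):
    # Models the ONNX SequenceAt op: extract one tensor from a Sequence value.
    return seq.tensors[index]


def handle_getitem(unbind_output, index):
    # aten_getitem must accept both forms of aten_unbind's result:
    # a traced Python sequence of tensors, or a single Sequence-typed value.
    if isinstance(unbind_output, (list, tuple)):
        return unbind_output[index]      # traced: plain Python sequence
    return sequence_at(unbind_output, index)  # functional: Sequence tensor
```

Either way, `aten_getitem(val_unbind, val_0)` resolves to the tensor at the given index, which is what the IR above expects.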
PyTorch ONNX Conversion Error Report
Error message:
Analysis
PyTorch ONNX Conversion Analysis
Model Information
The model has 120 parameters and 0 buffers (non-trainable parameters). Number of parameters per dtype:
Number of buffers per dtype:
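The per-dtype counts in reports like this are simple histograms over the model's parameters and buffers. A framework-free sketch, where the `(name, dtype, is_buffer)` triples are an assumed stand-in for iterating `model.named_parameters()` and `model.named_buffers()`:

```python
from collections import Counter


def count_per_dtype(entries):
    # entries: iterable of (name, dtype, is_buffer) triples -- a simplified
    # stand-in for a model's parameters and buffers.
    params = Counter(dtype for _, dtype, is_buffer in entries if not is_buffer)
    buffers = Counter(dtype for _, dtype, is_buffer in entries if is_buffer)
    return dict(params), dict(buffers)
```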
Inputs:
arg4_1
:TensorMetadata(shape=torch.Size([11, 2, 5]), dtype=torch.float32, requires_grad=False, stride=(10, 5, 1), memory_format=torch.contiguous_format, is_quantized=False, qparams={})
arg5_1
:TensorMetadata(shape=torch.Size([1, 2, 3]), dtype=torch.float32, requires_grad=False, stride=(6, 3, 1), memory_format=torch.contiguous_format, is_quantized=False, qparams={})
arg6_1
:TensorMetadata(shape=torch.Size([1, 2, 3]), dtype=torch.float32, requires_grad=False, stride=(6, 3, 1), memory_format=torch.contiguous_format, is_quantized=False, qparams={})
Outputs:
cat
:TensorMetadata(shape=torch.Size([11, 2, 3]), dtype=torch.float32, requires_grad=False, stride=(6, 3, 1), memory_format=torch.contiguous_format, is_quantized=False, qparams={})
stack
:TensorMetadata(shape=torch.Size([1, 1, 2, 3]), dtype=torch.float32, requires_grad=False, stride=(6, 6, 3, 1), memory_format=torch.channels_last, is_quantized=False, qparams={})
stack_1
:TensorMetadata(shape=torch.Size([1, 1, 2, 3]), dtype=torch.float32, requires_grad=False, stride=(6, 6, 3, 1), memory_format=torch.channels_last, is_quantized=False, qparams={})
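The stride tuples in the metadata above follow directly from the shapes; a sketch of how row-major (contiguous) strides are derived. Note that for size-1 dimensions any stride leaves the layout unchanged, which is plausibly why the [1, 1, 2, 3] outputs report channels_last while carrying contiguous-looking strides:

```python
def contiguous_strides(shape):
    # Row-major (torch.contiguous_format) strides, in elements:
    # the stride of dimension i is the product of all dimension sizes after i.
    strides, acc = [], 1
    for size in reversed(shape):
        strides.append(acc)
        acc *= size
    return tuple(reversed(strides))

# Matches the metadata above:
#   (11, 2, 5)   -> (10, 5, 1)
#   (1, 2, 3)    -> (6, 3, 1)
#   (1, 1, 2, 3) -> (6, 6, 3, 1)
```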
The FX graph has 244 nodes in total. Number of FX nodes per op:
placeholder: 7
call_function: 236
output: 1

Of the call_function nodes, the counts of operators used are:

<built-in function getitem>: 57
aten.sigmoid.default: 33
aten.mul.Tensor: 33
aten.view.default: 24
aten.add.Tensor: 22
aten.tanh.default: 22
aten.t.default: 12
aten.addmm.default: 12
aten.split.Tensor: 11
aten.unbind.int: 3
aten.unsqueeze.default: 2
aten.squeeze.dim: 2
aten.stack.default: 2
aten.cat.default: 1

ONNX Conversion Information
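A per-op histogram like this can be computed directly from an FX graph. A minimal, framework-free sketch, where the `(op, target)` pairs are an assumed stand-in for the `(node.op, node.target)` attributes of `graph_module.graph.nodes` in torch.fx:

```python
from collections import Counter


def fx_node_histogram(nodes):
    # nodes: iterable of (op_kind, target) pairs, a simplified stand-in
    # for torch.fx graph nodes.
    per_op = Counter(op for op, _ in nodes)
    # Only call_function nodes carry an operator target worth counting.
    per_target = Counter(t for op, t in nodes if op == "call_function")
    return per_op, per_target
```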
All operators in the model have registered ONNX decompositions.