✅ Obtain model graph with `torch.export.export`
❌ Translate the graph into ONNX
⚪ Run `onnx.checker` on the ONNX model
⚪ Execute the model with ONNX Runtime
⚪ Validate model output accuracy
Error message:

```
Traceback (most recent call last):
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 480, in _add_nodes
    _handle_call_function_node_with_lowering(
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 342, in _handle_call_function_node_with_lowering
    _handle_getitem_node(node, node_name_to_values)
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 237, in _handle_getitem_node
    assert isinstance(
AssertionError: Expected unbind to output sequence, got %"val_unbind"<?,?>

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_patch.py", line 196, in _torch_onnx_export
    ir_model = torch_onnx.exported_program_to_ir(program)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 619, in exported_program_to_ir
    values = _add_nodes(exported_program, model, lower=lower, registry=registry)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 491, in _add_nodes
    raise errors.OnnxConversionError(
torch_onnx.errors.OnnxConversionError: Error when translating node %getitem_1 : [num_users=1] = call_function[target=operator.getitem](args = (%unbind, 1), kwargs = {}). See the stack trace for more information.
```
## PyTorch ONNX Conversion Error Report

Error message:

Exported program:
## PyTorch ONNX Conversion Analysis

### Model Information
The model has 0 parameters and 0 buffers (non-trainable parameters), so the per-dtype parameter and buffer tables are empty.
Inputs:

- `arg0_1`: `TensorMetadata(shape=torch.Size([3, 4, 5]), dtype=torch.float32, requires_grad=False, stride=(20, 5, 1), memory_format=torch.contiguous_format, is_quantized=False, qparams={})`

Outputs:

- `getitem_1`: `TensorMetadata(shape=torch.Size([4, 5]), dtype=torch.float32, requires_grad=False, stride=(5, 1), memory_format=torch.contiguous_format, is_quantized=False, qparams={})`
The FX graph has 4 nodes in total. Number of FX nodes per op:

- placeholder: 1
- call_function: 2
- output: 1

Of the call_function nodes, the counts of operators used are:

- `aten.unbind.int`: 1
- `<built-in function getitem>`: 1

### ONNX Conversion Information
All operators in the model have registered ONNX decompositions.