✅ Obtain model graph with `torch.export.export`
❌ Translate the graph into ONNX
⚪ Run `onnx.checker` on the ONNX model
⚪ Execute the model with ONNX Runtime
⚪ Validate model output accuracy
Error message:

    Traceback (most recent call last):
      File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 490, in _add_nodes
        _handle_call_function_node_with_lowering(
      File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 422, in _handle_call_function_node_with_lowering
        _set_shape_types(outputs, node.meta["val"], complex_to_float=True)
      File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 109, in _set_shape_types
        for value, meta_val in zip(values, meta_vals):
                               ^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/justinc/Documents/GitHub/torch-onnx/venv/lib/python3.11/site-packages/torch/_tensor.py", line 1047, in __iter__
        raise TypeError("iteration over a 0-d tensor")
    TypeError: iteration over a 0-d tensor

    The above exception was the direct cause of the following exception:

    Traceback (most recent call last):
      File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_patch.py", line 222, in _torch_onnx_export
        ir_model = torch_onnx.exported_program_to_ir(program)
                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 626, in exported_program_to_ir
        values = _add_nodes(exported_program, model, lower=lower, registry=registry)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
      File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 501, in _add_nodes
        raise RuntimeError(
    RuntimeError: Error when translating node %zeros : [num_users=1] = call_function[target=torch.ops.aten.zeros.default](args = ([],), kwargs = {device: cpu, pin_memory: False}). See the stack trace for more information.
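The underlying `TypeError` can be reproduced outside the exporter: PyTorch refuses iteration over a 0-d (scalar) tensor, which is exactly what `zip(values, meta_vals)` in `_set_shape_types` trips over when `node.meta["val"]` holds a single 0-d tensor rather than a sequence. A minimal sketch:

```python
import torch

# A 0-d (scalar) tensor, the same shape torch.zeros(()) produces.
scalar = torch.zeros(())
assert scalar.dim() == 0

# Tensor.__iter__ raises for 0-d tensors, so any attempt to zip or
# loop over one fails immediately.
try:
    for _ in scalar:
        pass
except TypeError as e:
    print(e)  # "iteration over a 0-d tensor"
```

A 1-d tensor of any length iterates fine; only the zero-dimensional case raises.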
Exported program:

    ExportedProgram:
        class GraphModule(torch.nn.Module):
            def forward(self, arg0_1: "f32[]"):
                # File: /Users/justinc/Documents/GitHub/torch-onnx/tests/torch_tests/torch_onnx_test.py:3217 in forward, code: y = torch.zeros(())
                zeros: "f32[]" = torch.ops.aten.zeros.default([], device = device(type='cpu'), pin_memory = False)

                # File: /Users/justinc/Documents/GitHub/torch-onnx/tests/torch_tests/torch_onnx_test.py:3218 in forward, code: y += x
                add: "f32[]" = torch.ops.aten.add.Tensor(zeros, arg0_1);  zeros = arg0_1 = None
                return (add,)

    Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='add'), target=None)])
    Range constraints: {}
Analysis
PyTorch ONNX Conversion Analysis
Model Information
The model has 0 parameters and 0 buffers (non-trainable parameters).
Number of parameters per dtype:
Number of buffers per dtype:
Inputs:

    arg0_1: TensorMetadata(shape=torch.Size([]), dtype=torch.float32, requires_grad=False, stride=(), memory_format=torch.contiguous_format, is_quantized=False, qparams={})

Outputs:

    add: TensorMetadata(shape=torch.Size([]), dtype=torch.float32, requires_grad=False, stride=(), memory_format=torch.contiguous_format, is_quantized=False, qparams={})
The FX graph has 4 nodes in total. Number of FX nodes per op:

    placeholder: 1
    call_function: 2
    output: 1

Of the call_function nodes, the counts of operators used are:

    aten.zeros.default: 1
    aten.add.Tensor: 1

ONNX Conversion Information
All operators in the model have registered ONNX decompositions.
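Since every operator has a registered decomposition, the failure is in shape/type propagation rather than in operator coverage. One possible direction for a fix, sketched with a hypothetical helper (`_as_sequence` is not part of torch-onnx): normalize `node.meta["val"]` to a sequence before zipping, so a single 0-d FakeTensor from a one-output op is never iterated directly.

```python
from typing import Any, Sequence

def _as_sequence(meta_val: Any) -> Sequence[Any]:
    # Hypothetical guard, not the actual torch-onnx fix: single-output
    # ops store one FakeTensor in node.meta["val"] instead of a list.
    # Wrapping it in a tuple prevents zip() from invoking
    # Tensor.__iter__ on a 0-d tensor, which raises
    # "iteration over a 0-d tensor".
    if isinstance(meta_val, (list, tuple)):
        return meta_val
    return (meta_val,)
```

With such a guard, the loop in `_set_shape_types` could read `for value, meta_val in zip(values, _as_sequence(meta_vals)):` and handle single- and multi-output nodes uniformly.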