justinchuby / torch-onnx

A standalone version of the next PyTorch ONNX exporter
MIT License

Inputs are not wrapped as Symbolic tensors #37

Closed · justinchuby closed this issue 3 months ago

justinchuby commented 3 months ago

PyTorch ONNX Conversion Error Report

✅ Obtain model graph with `torch.export.export`
❌ Translate the graph into ONNX
⚪ Run `onnx.checker` on the ONNX model
⚪ Execute the model with ONNX Runtime
⚪ Validate model output accuracy

Error message:

```
Traceback (most recent call last):
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_building.py", line 453, in eval_function
    return function.function(**named_inputs, **named_attrs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/justinc/Documents/GitHub/onnxscript/onnxscript/function_libs/torch_lib/ops/nn.py", line 1166, in aten_mse_loss
    result = op.Mul(self - target, self - target)
                    ~~~~~^~~~~~~~
TypeError: unsupported operand type(s) for -: 'Input' and 'Input'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 398, in _handle_call_function_node_with_lowering
    outputs = onnx_function(*onnx_args, **onnx_kwargs)
              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/justinc/Documents/GitHub/onnxscript/onnxscript/values.py", line 528, in __call__
    return evaluator.default().eval_function(self, args, kwargs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_building.py", line 462, in eval_function
    raise RuntimeError(
RuntimeError: Error calling function 'aten_mse_loss' with args (Input('arg0_1', type=Tensor(FLOAT), shape=[2,3,5], producer=None, index=None), Input('arg1_1', type=Tensor(FLOAT), shape=[2,3,5], producer=None, index=None), 0) and kwargs {}.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 490, in _add_nodes
    _handle_call_function_node_with_lowering(
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 400, in _handle_call_function_node_with_lowering
    raise RuntimeError(
RuntimeError: Error when calling function 'OnnxFunction(<function aten_mse_loss at 0x13f9ac180>)' with args '[Input('arg0_1', type=Tensor(FLOAT), shape=[2,3,5], producer=None, index=None), Input('arg1_1', type=Tensor(FLOAT), shape=[2,3,5], producer=None, index=None), 0]' and kwargs '{}'

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_patch.py", line 222, in _torch_onnx_export
    ir_model = torch_onnx.exported_program_to_ir(program)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 626, in exported_program_to_ir
    values = _add_nodes(exported_program, model, lower=lower, registry=registry)
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 501, in _add_nodes
    raise RuntimeError(
RuntimeError: Error when translating node %mse_loss : [num_users=1] = call_function[target=torch.ops.aten.mse_loss.default](args = (%arg0_1, %arg1_1, 0), kwargs = {}). See the stack trace for more information.
```
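The failure comes down to the translator handing raw `Input` values, which define no arithmetic operator overloads, to an onnxscript function that evaluates Python `-` on them. A minimal stand-alone sketch of the mismatch, and of the kind of wrapping the issue title asks for (the class names here are illustrative, not the actual torch-onnx types):

```python
class Input:
    """Stand-in for a raw graph-input value: no operator overloads defined."""

    def __init__(self, name: str):
        self.name = name


class SymbolicTensor:
    """Hypothetical wrapper that makes Python operators emit graph ops."""

    def __init__(self, expr: str):
        self.expr = expr

    def __sub__(self, other: "SymbolicTensor") -> "SymbolicTensor":
        # A real exporter would record a Sub node in the graph here;
        # this sketch just builds a string describing the node.
        return SymbolicTensor(f"Sub({self.expr}, {other.expr})")


a, b = Input("arg0_1"), Input("arg1_1")
try:
    a - b  # mirrors the reported failure
except TypeError as e:
    print(e)  # unsupported operand type(s) for -: 'Input' and 'Input'

wrapped = SymbolicTensor("arg0_1") - SymbolicTensor("arg1_1")
print(wrapped.expr)  # Sub(arg0_1, arg1_1)
```

Wrapping every graph input this way before invoking the onnxscript function would let expressions like `self - target` in `aten_mse_loss` dispatch to the wrapper instead of raising `TypeError`.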

Exported program:

```
ExportedProgram:
    class GraphModule(torch.nn.Module):
        def forward(self, arg0_1: "f32[2, 3, 5]", arg1_1: "f32[2, 3, 5]"):
            # File: /Users/justinc/Documents/GitHub/torch-onnx/tests/torch_tests/torch_onnx_test.py:8561 in forward, code: self.loss1(input, target),
            mse_loss: "f32[2, 3, 5]" = torch.ops.aten.mse_loss.default(arg0_1, arg1_1, 0)

            # File: /Users/justinc/Documents/GitHub/torch-onnx/tests/torch_tests/torch_onnx_test.py:8562 in forward, code: self.loss2(input, target),
            mse_loss_1: "f32[]" = torch.ops.aten.mse_loss.default(arg0_1, arg1_1, 2)

            # File: /Users/justinc/Documents/GitHub/torch-onnx/tests/torch_tests/torch_onnx_test.py:8563 in forward, code: self.loss3(input, target),
            mse_loss_2: "f32[]" = torch.ops.aten.mse_loss.default(arg0_1, arg1_1);  arg0_1 = arg1_1 = None
            return (mse_loss, mse_loss_1, mse_loss_2)

Graph signature: ExportGraphSignature(input_specs=[InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg0_1'), target=None, persistent=None), InputSpec(kind=<InputKind.USER_INPUT: 1>, arg=TensorArgument(name='arg1_1'), target=None, persistent=None)], output_specs=[OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='mse_loss'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='mse_loss_1'), target=None), OutputSpec(kind=<OutputKind.USER_OUTPUT: 1>, arg=TensorArgument(name='mse_loss_2'), target=None)])
Range constraints: {}
```

Analysis

PyTorch ONNX Conversion Analysis

Model Information

The model has 0 parameters and 0 buffers (non-trainable parameters). Number of parameters per dtype:

`defaultdict(<class 'int'>, {})`

Number of buffers per dtype:

`defaultdict(<class 'int'>, {})`

Inputs:

Outputs:

The FX graph has 6 nodes in total. Number of FX nodes per op:

Of the call_function nodes, the counts of operators used are:

ONNX Conversion Information

All operators in the model have registered ONNX decompositions.