justinchuby / torch-onnx

A standalone version of the next PyTorch ONNX exporter
MIT License

Sequence type not assigned correctly #62

Open justinchuby opened 3 months ago

justinchuby commented 3 months ago

PyTorch ONNX Conversion Error Report

✅ Obtain model graph with `torch.export.export`
✅ Translate the graph into ONNX
❌ Run `onnx.checker` on the ONNX model
⚪ Execute the model with ONNX Runtime
⚪ Validate model output accuracy

Error message:

Traceback (most recent call last):

  File "/Users/justinc/Documents/GitHub/torch-onnx/src/torch_onnx/_core.py", line 911, in export
    onnx.checker.check_model(onnx_program.model_proto, full_check=True)

  File "/Users/justinc/Documents/GitHub/torch-onnx/venv/lib/python3.11/site-packages/onnx/checker.py", line 179, in check_model
    C.check_model(

onnx.onnx_cpp2py_export.shape_inference.InferenceError: [ShapeInferenceError] Inference error(s): (op_type:aten_getitem, node name: node_aten_getitem_2): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (2)
(op_type:aten_getitem, node name: node_aten_getitem_4): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (2)
(op_type:aten_getitem, node name: node_aten_getitem_14): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (2)
(op_type:aten_getitem, node name: node_aten_getitem_16): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (2)
(op_type:aten_getitem, node name: node_aten_getitem_18): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (2)
(op_type:aten_getitem, node name: node_aten_getitem_20): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (2)
(op_type:aten_getitem, node name: node_aten_getitem_22): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (2)
(op_type:aten_getitem, node name: node_aten_getitem_24): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (2)
(op_type:aten_getitem, node name: node_aten_getitem_26): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (2)
(op_type:aten_getitem, node name: node_aten_getitem_28): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (2)
(op_type:aten_getitem, node name: node_aten_getitem_30): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (2)
(op_type:aten_getitem, node name: node_aten_getitem_32): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (2)
(op_type:aten_getitem, node name: node_aten_getitem_34): [ShapeInferenceError] Inferred shape and existing shape differ in rank: (3) vs (2)
(op_type:aten_stack, node name: node_aten_stack_249): [ShapeInferenceError] (op_type:ConcatFromSequence, node name: n0): input_sequence typestr: S, has unsupported type: tensor(float)
(op_type:aten_stack, node name: node_aten_stack_250): [ShapeInferenceError] (op_type:ConcatFromSequence, node name: n0): input_sequence typestr: S, has unsupported type: tensor(float)

Analysis

PyTorch ONNX Conversion Analysis

Model Information

The model has 120 parameters and 0 buffers (non-trainable parameters). Number of parameters per dtype:

defaultdict(<class 'int'>, {torch.float32: 120})

Number of buffers per dtype:

defaultdict(<class 'int'>, {})

Inputs:

Outputs:

The FX graph has 244 nodes in total. Number of FX nodes per op:

Of the call_function nodes, the counts of operators used are:

ONNX Conversion Information

All operators in the model have registered ONNX decompositions.

justinchuby commented 3 months ago
      0 |  # node_aten_unbind_0
           %"val_unbind"<?,?> ⬅️ pkg.onnxscript.torch_lib::aten_unbind(%"arg5_1") {dim=0}
      1 |  # node_Constant_1
           %"val_0"<?,?> ⬅️ ::Constant() {value=Tensor<INT64,[]>(array(0), name=None)}
      2 |  # node_aten_getitem_2
           %"val_getitem"<FLOAT,[2,3]> ⬅️ pkg.onnxscript.torch_lib::aten_getitem(%"val_unbind", %"val_0")
      3 |  # node_aten_unbind_3
           %"val_unbind_1"<?,?> ⬅️ pkg.onnxscript.torch_lib::aten_unbind(%"arg6_1") {dim=0}
      4 |  # node_aten_getitem_4
           %"val_getitem_1"<FLOAT,[2,3]> ⬅️ pkg.onnxscript.torch_lib::aten_getitem(%"val_unbind_1", %"val_0")
      5 |  # node_aten_unsqueeze_5
           %"val_unsqueeze"<FLOAT,[1,2,3]> ⬅️ pkg.onnxscript.torch_lib::aten_unsqueeze(%"val_getitem") {dim=0}
justinchuby commented 3 months ago

We need to handle both the traced sequence output and the functional, single-valued Sequence tensor output.