microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

[ONNXRuntimeError] Load model from *** failed: Unsuported type proto value case #11889

Open cyr0930 opened 2 years ago

cyr0930 commented 2 years ago

Is your feature request related to a problem? Please describe.

After I exported a model to ONNX format, I called onnxruntime.InferenceSession([onnx_file_path]) to check that it works. However, it gives me an error like the one below:

onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from [onnx_file_path] failed:Unsuported type proto value case.

What does this error mean? I can't figure out what the problem is, and there's no proper documentation about it.
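A quick way to narrow this down (not part of the original report) is to inspect which TypeProto case each graph input declares; ORT raises this error when it hits a type case it does not handle. A minimal sketch, assuming the exported file is model.onnx:

import onnx

model = onnx.load("model.onnx")  # placeholder path for the exported file
for inp in model.graph.input:
    # WhichOneof("value") names the TypeProto case, e.g. "tensor_type",
    # "sequence_type", or "map_type"
    print(inp.name, inp.type.WhichOneof("value"))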

System information

Describe the solution you'd like

A more precise log message or documentation for the error above.

ytaous commented 2 years ago

Do you have steps to repro and a full stack trace?

mreso commented 1 year ago

Hi @ytaous,

I am not the original poster, so I don't know whether this has the same cause, but I stumbled over the same error message while looking into a different issue.

My env: PyTorch 1.13.1, ORT 1.14.1. Let me know if you need more info.

Repro:

from typing import Tuple, List

import torch
import torch.nn as nn
import onnxruntime as ort

# A model whose forward() takes a list of tuples of tensors -- this nested
# input type is what ends up exported as a type proto ORT cannot load.
class Foo(nn.Module):
    def __init__(self):
        super().__init__()

        self.fc_1 = nn.Linear(1, 1)
        self.fc_2 = nn.Linear(1, 1)

    def forward(self, x: List[Tuple[torch.Tensor]]) -> torch.Tensor:
        return self.fc_1(x[0][0]) + self.fc_2(x[1][0])

model = Foo()

# A tuple of one-element tuples, matching the List[Tuple[Tensor]] annotation.
dummy_input = ((torch.rand(1),), (torch.rand(1),))
print(dummy_input)
print(model(dummy_input))

scripted_model = torch.jit.script(model, example_inputs=dummy_input)

input_names = ["actual_input_1"] + ["learned_%d" % i for i in range(4)]
output_names = ["output1"]

torch.onnx.export(scripted_model, [dummy_input], "model.onnx", verbose=True,
                  input_names=input_names, output_names=output_names)

# Fails with "Unsuported type proto value case" while loading the model.
ort_session = ort.InferenceSession("model.onnx")

Output:

((tensor([0.3086]),), (tensor([0.6852]),))
tensor([-0.6065], grad_fn=<AddBackward0>)
/Users/mreso/miniconda3/envs/onnx/lib/python3.9/site-packages/torch/jit/_script.py:1280: UserWarning: Warning: monkeytype is not installed. Please install https://github.com/Instagram/MonkeyType to enable Profile-Directed Typing in TorchScript. Refer to https://github.com/Instagram/MonkeyType/blob/master/README.rst to install MonkeyType.
  warnings.warn("Warning: monkeytype is not installed. Please install https://github.com/Instagram/MonkeyType "
Exported graph: graph(%actual_input_1 : (Float(1, strides=[1], requires_grad=0, device=cpu))[],
      %learned_1 : Float(1, 1, strides=[1, 1], requires_grad=0, device=cpu),
      %learned_3 : Float(1, 1, strides=[1, 1], requires_grad=0, device=cpu),
      %onnx::MatMul_16 : Float(1, 1, strides=[1, 1], requires_grad=0, device=cpu),
      %onnx::MatMul_17 : Float(1, 1, strides=[1, 1], requires_grad=0, device=cpu)):
  %/Constant_output_0 : Long(device=cpu) = onnx::Constant[value={0}, onnx_name="/Constant"](), scope: Foo:: # /Users/mreso/Projects/playground/onnx/export.py:16:25
  %/Gather_output_0 : Tensor = onnx::Gather[axis=0, onnx_name="/Gather"](%actual_input_1, %/Constant_output_0), scope: Foo:: # /Users/mreso/Projects/playground/onnx/export.py:16:25
  %/fc_1/MatMul_output_0 : Tensor = onnx::MatMul[onnx_name="/fc_1/MatMul"](%/Gather_output_0, %onnx::MatMul_16), scope: Foo::/torch.nn.modules.linear.Linear::fc_1 # /Users/mreso/miniconda3/envs/onnx/lib/python3.9/site-packages/torch/nn/modules/linear.py:114:15
  %/fc_1/Add_output_0 : FloatTensor(device=cpu) = onnx::Add[onnx_name="/fc_1/Add"](%learned_1, %/fc_1/MatMul_output_0), scope: Foo::/torch.nn.modules.linear.Linear::fc_1 # /Users/mreso/miniconda3/envs/onnx/lib/python3.9/site-packages/torch/nn/modules/linear.py:114:15
  %/Constant_1_output_0 : Long(device=cpu) = onnx::Constant[value={1}, onnx_name="/Constant_1"](), scope: Foo:: # /Users/mreso/Projects/playground/onnx/export.py:16:46
  %/Gather_1_output_0 : Tensor = onnx::Gather[axis=0, onnx_name="/Gather_1"](%actual_input_1, %/Constant_1_output_0), scope: Foo:: # /Users/mreso/Projects/playground/onnx/export.py:16:46
  %/fc_2/MatMul_output_0 : Tensor = onnx::MatMul[onnx_name="/fc_2/MatMul"](%/Gather_1_output_0, %onnx::MatMul_17), scope: Foo::/torch.nn.modules.linear.Linear::fc_2 # /Users/mreso/miniconda3/envs/onnx/lib/python3.9/site-packages/torch/nn/modules/linear.py:114:15
  %/fc_2/Add_output_0 : FloatTensor(device=cpu) = onnx::Add[onnx_name="/fc_2/Add"](%learned_3, %/fc_2/MatMul_output_0), scope: Foo::/torch.nn.modules.linear.Linear::fc_2 # /Users/mreso/miniconda3/envs/onnx/lib/python3.9/site-packages/torch/nn/modules/linear.py:114:15
  %output1 : Float(*, strides=[1], requires_grad=1, device=cpu) = onnx::Add[onnx_name="/Add"](%/fc_1/Add_output_0, %/fc_2/Add_output_0), scope: Foo:: # /Users/mreso/Projects/playground/onnx/export.py:16:15
  return (%output1)

Traceback (most recent call last):
  File "/Users/mreso/Projects/playground/onnx/export.py", line 32, in <module>
    ort_session = ort.InferenceSession("model.onnx")
  File "/Users/mreso/miniconda3/envs/onnx/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 360, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "/Users/mreso/miniconda3/envs/onnx/lib/python3.9/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py", line 397, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from model.onnx failed:Unsuported type proto value case.
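Note that the exported graph types %actual_input_1 as (Float(1, ...))[], i.e. a sequence of tuples, which ORT apparently has no loadable TypeProto case for. A workaround sketch (my assumption, not an official fix; FooFlat and model_flat.onnx are hypothetical names): flatten the nested inputs into plain tensor arguments so every graph input becomes an ordinary tensor:

class FooFlat(nn.Module):  # flattened variant of Foo above
    def __init__(self):
        super().__init__()
        self.fc_1 = nn.Linear(1, 1)
        self.fc_2 = nn.Linear(1, 1)

    # Plain tensor arguments export as ordinary tensor_type inputs.
    def forward(self, a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
        return self.fc_1(a) + self.fc_2(b)

torch.onnx.export(FooFlat(), (torch.rand(1), torch.rand(1)), "model_flat.onnx",
                  input_names=["a", "b"], output_names=["output1"])
ort_session = ort.InferenceSession("model_flat.onnx")  # loads without the error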
mitchelldehaven commented 1 year ago

I'm running into the same issue, albeit in a slightly more complicated case involving a custom operator. I can also reproduce @mreso's example with the same PyTorch and onnxruntime versions.

bardout commented 4 months ago

Still open after 2 years, no solution? I had a similar case in a minimal example, with the code below, based on this tutorial: https://onnx.ai/sklearn-onnx/auto_tutorial/plot_abegin_convert_pipeline.html (latest packages installed on Python 3.11.7).

import numpy
from onnx.reference import ReferenceEvaluator
from onnxruntime import InferenceSession
from skl2onnx import to_onnx
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y)

# train_classifiers() is not defined in the original snippet; any fitted
# regressor stands in for it here.
ereg = GradientBoostingRegressor().fit(X_train, y_train)

onx = to_onnx(ereg, X_train[:1].astype(numpy.float32), target_opset=12)
oinf = ReferenceEvaluator(onx)

File "/opt/venv/lib/python3.10/site-packages/onnx/reference/reference_evaluator.py", line 261, in init raise TypeError(f"Unexpected type {type(proto)} for proto.")