microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License
14.66k stars 2.93k forks

[Models larger than 2GB :(] Specify mid-graph.output after initializing InferenceSession #21367

Open AccSrd opened 3 months ago

AccSrd commented 3 months ago

Describe the issue

Intermediate node outputs cannot be obtained via onnxruntime.InferenceSession for ONNX models larger than 2 GB. With the onnxruntime Python package, the usual way to get an intermediate node's output is to call model.graph.output.extend([onnx.ValueInfoProto(name=name)]) after onnx.load(path) to register the node name as an extra graph output, and then initialize InferenceSession. However, initializing InferenceSession from the loaded model object requires passing the result of model.SerializeToString(), which does not support ONNX models larger than 2GB :(

To reproduce

Normally, we use the following code to obtain the output of an intermediate node:

import onnx
import onnxruntime

model = onnx.load(path)

# Register each intermediate tensor as an extra graph output
for name in nodes_outputs:
    model.graph.output.extend([onnx.ValueInfoProto(name=name)])

ort_session = onnxruntime.InferenceSession(model.SerializeToString(), providers=['CPUExecutionProvider'])
outputs = [x.name for x in ort_session.get_outputs()]
ort_outs = ort_session.run(outputs, ort_inputs)

However, if the ONNX model is larger than 2GB, SerializeToString() raises an error:

ValueError: Message onnx.ModelProto exceeds maximum protobuf size of 2GB: 10276474307

Is there a possible solution to this awkward situation? Thank you very much.

Urgency

No response

Platform

Linux

OS Version

Ubuntu 9.4.0-1ubuntu1~20.04.2

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.18.1

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU

Execution Provider Library Version

No response

mindest commented 3 months ago

I don't have an ONNX model to test now, but could you test if saving it first works? Like

for name in nodes_outputs:
    model.graph.output.extend([onnx.ValueInfoProto(name=name)])

output_onnx_path = "test.onnx"
onnx.save(model, output_onnx_path, save_as_external_data=True)
ort_session = onnxruntime.InferenceSession(output_onnx_path, providers=['CPUExecutionProvider'])

outputs = [x.name for x in ort_session.get_outputs()]
ort_outs = ort_session.run(outputs, ort_inputs)
AccSrd commented 3 months ago

> I don't have an ONNX model to test now, but could you test if saving it first works? Like [...]

Thanks for your kind reply! I'll check it soon and give you feedback :)

github-actions[bot] commented 2 months ago

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.