microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

Is there any way to convert a QDQ model to a QLinear model using ORT? #18511

Open Rickustc opened 11 months ago

Rickustc commented 11 months ago

Describe the issue

I have a QDQ model whose operator weights are actually int8 (QDQ). Is there any setting in ort.InferenceSession that can convert the QDQ representation into QLinear mode? Like this: [image]

To reproduce

btw it is a resnet50 :D

Urgency

No response

Platform

Linux

OS Version

latest

ONNX Runtime Installation

Built from Source

ONNX Runtime Version or Commit ID

latest

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU

Execution Provider Library Version

No response

yufenglee commented 11 months ago

You can try setting SessionOptions' optimized_model_filepath to the destination path and graph_optimization_level to onnxruntime.GraphOptimizationLevel.ORT_ENABLE_EXTENDED, then load the model; the optimized model will be saved to the destination path. However, it is possible that not all QDQ nodes are fused into QLinear ops.

Rickustc commented 11 months ago

@yufenglee thank you for the reply! Is there any way to control, per operator, whether it is converted to a QLinear op?