
onnxruntime quantization weights not tied #21277

Open inisis opened 1 month ago

inisis commented 1 month ago

Describe the issue

I have a model with tied weights, but after quantization one branch is replaced with the quantized weights while the other branch still references the original float weights (see the screenshots and the detection sketch below).

[screenshots: after quantization, one branch consumes the quantized initializer while the parallel branch still consumes the original float initializer]
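For reference, a minimal sketch (assuming the model path from the repro below) to list initializers consumed by more than one node, i.e. tied weights; it only walks the top-level graph, so weights shared across If subgraphs would need a recursive walk:

from collections import Counter

import onnx

# Count how many nodes consume each initializer (top-level graph only).
model = onnx.load('./decoder_model_merged_slim.onnx')
init_names = {init.name for init in model.graph.initializer}

uses = Counter()
for node in model.graph.node:
    for name in node.input:
        if name in init_names:
            uses[name] += 1

for name, count in uses.items():
    if count > 1:
        print(f'{name} is tied: consumed by {count} nodes')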

To reproduce

from onnxruntime.quantization import quantize_dynamic, QuantType

model_fp32 = './decoder_model_merged_slim.onnx'
model_quant = './decoder_model_merged_slim_quantized.onnx'

# Dynamic int8 weight quantization; EnableSubgraph also quantizes
# the branches of control-flow nodes (the merged decoder has an If).
quantized_model = quantize_dynamic(
    model_input=model_fp32,
    model_output=model_quant,
    weight_type=QuantType.QInt8,
    extra_options={'EnableSubgraph': True},
    per_channel=False,
    reduce_range=False,
)

Urgency

No response

Platform

Linux

OS Version

Ubuntu 20.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.18.1

ONNX Runtime API

Python

Architecture

X64

Execution Provider

Default CPU

Execution Provider Library Version

No response

yufenglee commented 1 month ago

You can try running the quantization preprocess and then calling the quantization script; it should resolve the issue: https://github.com/microsoft/onnxruntime/blob/4c3c809bdbcde4ea96f0a31a242ca6877a10c40a/onnxruntime/python/tools/quantization/preprocess.py#L21
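For example, something like this sketch (positional arguments are used because the parameter names of quant_pre_process have changed across releases):

# Run the quantization preprocess (optimization + shape inference)
# before calling quantize_dynamic on the preprocessed model.
from onnxruntime.quantization.shape_inference import quant_pre_process

quant_pre_process(
    './decoder_model_merged_slim.onnx',      # input model
    './decoder_model_merged_slim_op.onnx',   # preprocessed output
)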

inisis commented 1 month ago

python -m onnxruntime.quantization.preprocess --input decoder_model_merged_slim.onnx --output decoder_model_merged_slim_op.onnx

Running the preprocess raised an error:

Traceback (most recent call last):
  File "/root/miniconda3/lib/python3.9/runpy.py", line 197, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/root/miniconda3/lib/python3.9/runpy.py", line 87, in _run_code
    exec(code, run_globals)
  File "/root/miniconda3/lib/python3.9/site-packages/onnxruntime/quantization/preprocess.py", line 127, in <module>
    quant_pre_process(
  File "/root/miniconda3/lib/python3.9/site-packages/onnxruntime/quantization/shape_inference.py", line 81, in quant_pre_process
    model = SymbolicShapeInference.infer_shapes(
  File "/root/miniconda3/lib/python3.9/site-packages/onnxruntime/tools/symbolic_shape_infer.py", line 2908, in infer_shapes
    all_shapes_inferred = symbolic_shape_inference._infer_impl()
  File "/root/miniconda3/lib/python3.9/site-packages/onnxruntime/tools/symbolic_shape_infer.py", line 2672, in _infer_impl
    self.dispatcher_[node.op_type](node)
  File "/root/miniconda3/lib/python3.9/site-packages/onnxruntime/tools/symbolic_shape_infer.py", line 1187, in _infer_If
    self._fuse_tensor_type(node, i_out, vi.type, subgraph.output[i_out].type)
  File "/root/miniconda3/lib/python3.9/site-packages/onnxruntime/tools/symbolic_shape_infer.py", line 804, in _fuse_tensor_type
    dst_type.sequence_type.elem_type.tensor_type if is_sequence(dst_type) else dst_type.tensor_type
  File "/root/miniconda3/lib/python3.9/site-packages/onnxruntime/tools/symbolic_shape_infer.py", line 32, in is_sequence
    assert cls_type in ["tensor_type", "sequence_type"]
AssertionError
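The failure can be reproduced in isolation by running only the symbolic shape inference step that the preprocess invokes, e.g. with a sketch like:

import onnx
from onnxruntime.tools.symbolic_shape_infer import SymbolicShapeInference

# Runs only the symbolic shape inference pass; for this model it raises
# the AssertionError above while fusing the output types of an If node.
model = onnx.load('./decoder_model_merged_slim.onnx')
inferred = SymbolicShapeInference.infer_shapes(model)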

inisis commented 1 month ago

Can you please check this @tianleiwu

tianleiwu commented 1 month ago

@yufenglee, please look at the quantization tool issue.

yufenglee commented 1 month ago

The symbolic_shape_infer step fails. You can disable shape inference with the --skip_symbolic_shape option.
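From Python, the equivalent would be something like (a sketch; the keyword mirrors the CLI flag):

from onnxruntime.quantization.shape_inference import quant_pre_process

# Preprocess without the symbolic shape inference step that fails above.
quant_pre_process(
    './decoder_model_merged_slim.onnx',
    './decoder_model_merged_slim_op.onnx',
    skip_symbolic_shape=True,
)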

inisis commented 1 month ago

python -m onnxruntime.quantization.preprocess --input decoder_model_merged_slim.onnx --output decoder_model_merged_slim_op.onnx --skip_symbolic_shape True

After using this, the model size increased from 113 MB to 265 MB, which is not expected.
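One way to check where the extra bytes went, e.g. whether the optimization pass duplicated the shared initializers, is a sketch like:

import onnx
from onnx import numpy_helper

# Compare total initializer bytes of the original and preprocessed models.
def initializer_bytes(path):
    m = onnx.load(path)
    return sum(numpy_helper.to_array(t).nbytes for t in m.graph.initializer)

for p in ('./decoder_model_merged_slim.onnx', './decoder_model_merged_slim_op.onnx'):
    print(p, initializer_bytes(p) / 1e6, 'MB')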