microsoft / onnxruntime

ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
https://onnxruntime.ai
MIT License

Error quantizing vicuna-7b model from fp16 to int8 #20867

Open · JackWeiw opened this issue 2 months ago

JackWeiw commented 2 months ago

Describe the issue

Using `shape_inference.quant_pre_process` to preprocess the model results in an error, even with `skip_optimization=True`. [screenshot: error traceback]
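For context, the preprocessing call being described is roughly the following. This is a sketch with placeholder file names, not the exact command from the lost screenshot; `save_as_external_data=True` is suggested here because a 7B model exceeds the 2 GB protobuf limit:

```python
# Minimal sketch of the failing preprocessing step; file names are placeholders.
from onnxruntime.quantization.shape_inference import quant_pre_process

quant_pre_process(
    "vicuna-7b-fp16.onnx",       # input model path (placeholder)
    "vicuna-7b-fp16-prep.onnx",  # preprocessed output path (placeholder)
    skip_optimization=True,      # the flag that reportedly does not help
    save_as_external_data=True,  # assumption: needed for models larger than 2 GB
)
```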

After that, I used `quantize_dynamic`; it successfully quantizes the model to int8, but the quantized model then fails to load back. [screenshot: error traceback]
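The quantize-then-load sequence, again as a hedged sketch with placeholder paths. For a 7B model the quantized weights can still exceed 2 GB, so `use_external_data_format=True` may be required for the output to be saved and loaded correctly:

```python
# Sketch of the dynamic-quantization step and the load-back that fails.
import onnx
from onnxruntime.quantization import quantize_dynamic, QuantType

quantize_dynamic(
    "vicuna-7b-fp16-prep.onnx",   # placeholder input path
    "vicuna-7b-int8.onnx",        # placeholder output path
    weight_type=QuantType.QInt8,  # int8 weights
    use_external_data_format=True,
)

# onnx.load picks up external data files that sit next to the .onnx file.
model = onnx.load("vicuna-7b-int8.onnx")
```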

To reproduce

[screenshots: the preprocessing and quantization calls sketched above]

Urgency

Urgent, a paper delivery deadline is approaching!

Platform

Linux

OS Version

Ubuntu 22.04

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.17

ONNX Runtime API

Python

Architecture

X64

Execution Provider

CUDA

Execution Provider Library Version

CUDA 11.8

xadupre commented 2 months ago

Did you try to see if it works with onnxruntime==1.18?

JackWeiw commented 2 months ago

> Did you try to see if it works with onnxruntime==1.18?

I switched to onnxruntime==1.18; it still returns the same error when I try to pre-process. [screenshot: error traceback]

If I simply use `quantize_dynamic`, it works fine, but the resulting model then fails `check_model`. [screenshot: error traceback]

I set `opset_version` to the default (14) when exporting from PyTorch; my torch version is torch 2.3 with CUDA 11.8. Do you have any insights?
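One plausible explanation for the `check_model` failure (a guess, since the traceback screenshot is missing): `onnx.checker.check_model` rejects in-memory models larger than 2 GB, so for a model of this size the checker has to be given the file path instead of a loaded `ModelProto`:

```python
# Passing a path (rather than a loaded ModelProto) lets the checker handle
# models larger than 2 GB; file name is a placeholder.
import onnx

onnx.checker.check_model("vicuna-7b-int8.onnx")
```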

xadupre commented 2 months ago

Are you using the latest onnx package?

JackWeiw commented 2 months ago

> Are you using the latest onnx package?

I have updated onnx to 1.16.1 and onnxruntime to 1.18.0, and quantization now succeeds. [screenshot: successful quantization] However, when I try to run the model in onnxruntime, it reports an error. [screenshot: error traceback]
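For completeness, loading the quantized model under the CUDA execution provider would look roughly like the sketch below (the file name is a placeholder; the actual runtime error is unknown since the screenshot is lost):

```python
# Sketch: create an inference session on the CUDA EP with a CPU fallback.
import onnxruntime as ort

sess = ort.InferenceSession(
    "vicuna-7b-int8.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
print([i.name for i in sess.get_inputs()])  # sanity-check that the model loaded
```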

github-actions[bot] commented 1 month ago

This issue has been automatically marked as stale due to inactivity and will be closed in 30 days if no further activity occurs. If further support is needed, please provide an update and/or more details.