intel / neural-compressor

SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
https://intel.github.io/neural-compressor/
Apache License 2.0

Quantized Neural Compressor model not generating expected results on AMD processor #1531

Open Bhuvaneswaran-R opened 9 months ago

Bhuvaneswaran-R commented 9 months ago

Hi Team,

I have converted a normal t5-small model to ONNX using onnxruntime 1.15.1 and Python 3.10.12 on an Intel processor and an AMD processor, but received different responses. Please let me know how to use the same model on AMD processors.
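
For context, here is a minimal sketch of one way such an export can be produced. The report does not say which exporter was used, so the optimum package (and reusing the /t5-smallonnx path from the quantization snippet below) is an assumption for illustration only:

```python
# Hedged sketch of exporting t5-small to ONNX; the `optimum` exporter is an
# assumption, since the report does not say which tool produced the export.
from optimum.onnxruntime import ORTModelForSeq2SeqLM
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-small")
ort_model = ORTModelForSeq2SeqLM.from_pretrained("t5-small", export=True)
ort_model.save_pretrained("/t5-smallonnx")  # path reused from the quantization snippet below

# Quick sanity check of the FP32 ONNX export before quantization.
inputs = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
outputs = ort_model.generate(**inputs)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```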

Input Seq: translate English to German: The house is wonderful.

Results from Intel-based processors (T5-small output):

Actual: Das Haus ist wunderbar.

ONNX: Das Haus ist wunderbar.

Quantized: Das Haus ist wunderbar.

Neural Compressor (Intel): Das Haus ist wunderbar.

Results from AMD-based processors:

ONNX: Das Haus ist wunderbar.

Quantized: Das Haus ist wunderbar.

Neural Compressor: DOMIEIE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE IE

Code used for conversion:

Quantization code (with calibration data loader / dataset):

from neural_compressor import quantization
from neural_compressor.config import PostTrainingQuantConfig, TuningCriterion
from transformers import AutoModelForSeq2SeqLM

model = "/t5-smallonnx"

# default approach is "auto"; set approach="dynamic" for dynamic quantization
conf = PostTrainingQuantConfig(approach="dynamic")
q_model = quantization.fit(model=model, conf=conf)
q_model.save("/t5-small-neural-compressor.onnx")
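
Note that the snippet above uses the dynamic approach, which does not consume calibration data. If a static quantization run with the calibration data loader mentioned above was intended, a rough sketch could look like the following; the DummyCalibDataloader class, the input names, and the output path are illustrative assumptions and not part of the original report:

```python
# Hedged sketch: static PTQ with a user-supplied calibration dataloader.
# The dataloader class, input names, and output path are illustrative assumptions.
import numpy as np
from neural_compressor import quantization
from neural_compressor.config import PostTrainingQuantConfig

class DummyCalibDataloader:
    """Yields (inputs, label) pairs; Neural Compressor expects a batch_size attribute."""
    def __init__(self, batch_size=1):
        self.batch_size = batch_size

    def __iter__(self):
        for _ in range(10):
            dummy = {
                "input_ids": np.ones((1, 8), dtype=np.int64),
                "attention_mask": np.ones((1, 8), dtype=np.int64),
                "decoder_input_ids": np.zeros((1, 1), dtype=np.int64),
            }
            yield dummy, None  # label is unused during calibration

conf = PostTrainingQuantConfig(approach="static")
q_model = quantization.fit(
    model="/t5-smallonnx",  # path from the report above
    conf=conf,
    calib_dataloader=DummyCalibDataloader(),
)
q_model.save("/t5-small-neural-compressor-static.onnx")
```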

Neural Compressor version: 2.4.1

chensuyue commented 9 months ago

Hi @Bhuvaneswaran-R, thank you for using the tool. We do not officially support quantization on AMD processors. However, we have previously run inference on AMD CPUs with quantized models generated on Intel processors, and the accuracy was similar. https://github.com/intel/neural-compressor/blob/master/docs/source/validated_model_list.md#validated-onnx-qdq-int8-models-on-multiple-hardware-through-onnx-runtime
So if you just want to run inference on an AMD processor, you may try quantizing the model on Intel hardware and running inference on the AMD hardware.
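
For anyone reproducing this, a minimal sketch of loading the Intel-quantized model on the AMD machine with ONNX Runtime's default CPU execution provider is shown below; the input names and shapes are assumptions and depend on how the model was exported:

```python
# Hedged sketch: load the quantized model on the AMD box, confirm the
# execution provider, and run one forward pass with dummy inputs.
# Input names/shapes are assumptions for illustration only.
import numpy as np
import onnxruntime as ort

sess = ort.InferenceSession(
    "/t5-small-neural-compressor.onnx",
    providers=["CPUExecutionProvider"],  # force the default CPU EP on AMD
)
print("Providers in use:", sess.get_providers())
print("Expected inputs:", [i.name for i in sess.get_inputs()])

feed = {
    "input_ids": np.ones((1, 8), dtype=np.int64),
    "attention_mask": np.ones((1, 8), dtype=np.int64),
    "decoder_input_ids": np.zeros((1, 1), dtype=np.int64),
}
logits = sess.run(None, feed)[0]
print("Logits shape:", logits.shape, "NaNs:", np.isnan(logits).any())
```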

Bhuvaneswaran-R commented 9 months ago

Hi chensuyue, thank you for your response! The model was quantized on Intel-based hardware only and inference was run on an AMD processor. It runs, but the generated output is not as expected.

chensuyue commented 9 months ago

Oh, I see, thanks for your reply; I will track this issue. However, fixing the inference issue on AMD processors is not a high-priority task for my team, so we welcome contributions from the community to fix it.