intel / auto-round

Advanced Quantization Algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs"
https://arxiv.org/abs/2309.05516
Apache License 2.0

Add trainable equivalent transformation #146

Closed: yiliu30 closed this 1 month ago

yiliu30 commented 4 months ago

Resolve https://github.com/intel/auto-round/issues/134

This PR enables TEQ (trainable equivalent transformation). We can use it to evaluate accuracy with fake quantization.

Usage

python3 main.py --model_name facebook/opt-125m  --bits 4 --group_size 128 --enable_teq
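For readers unfamiliar with the technique, TEQ learns a per-input-channel scale s and rewrites y = x @ W.T as y = (x / s) @ (W * s).T, which is mathematically equivalent in full precision but can reduce the error when W * s is quantized. The sketch below is a minimal illustration of that idea, not the implementation in this PR; the `fake_quant` helper, the `TEQLinear` wrapper, and the log-parameterized scale are all assumptions made for the example.

```python
import torch

def fake_quant(w, bits=4, group_size=128):
    # Symmetric fake quantization, grouped along the input dimension.
    # (Hypothetical helper; the PR's actual fake-quant path may differ.)
    orig_shape = w.shape
    w = w.reshape(-1, group_size)
    maxq = 2 ** (bits - 1) - 1
    scale = (w.abs().max(dim=1, keepdim=True).values / maxq).clamp(min=1e-8)
    q = (w / scale).round().clamp(-maxq - 1, maxq)
    return (q * scale).reshape(orig_shape)

class TEQLinear(torch.nn.Module):
    """Illustrative wrapper: y = (x / s) @ (W * s).T with trainable s.

    With s = 1 this reproduces plain fake-quant of W; training s (e.g. to
    minimize MSE against the float output) can lower quantization error.
    """
    def __init__(self, linear, bits=4, group_size=128):
        super().__init__()
        self.linear = linear
        self.bits = bits
        self.group_size = group_size
        # One scale per input channel; log-parameterized to stay positive,
        # initialized to zero so s starts as the identity transform.
        self.log_s = torch.nn.Parameter(torch.zeros(linear.in_features))

    def forward(self, x):
        s = self.log_s.exp()
        w_q = fake_quant(self.linear.weight * s, self.bits, self.group_size)
        return torch.nn.functional.linear(x / s, w_q, self.linear.bias)
```

A typical recipe would freeze the original weights, train only `log_s` on calibration data against the full-precision layer output, then fold the learned scales into the quantized weights.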

Some TODOs for later PRs: