intel / auto-round

Advanced quantization algorithm for LLMs/VLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs".
https://arxiv.org/abs/2309.05516
Apache License 2.0

consecutive quantization for the same model with different config bug #352

Open wenhuach21 opened 5 days ago

wenhuach21 commented 5 days ago

When the same model is quantized consecutively with different configs, the layer config from the first run carries over and becomes incorrect. It needs to be cleared at the end of quantization.
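A minimal sketch of the failure mode (hypothetical names, not the actual auto-round API): a quantizer that caches per-layer settings across calls. If the cache is not cleared at the end of a run, a second run with a different config silently inherits stale entries from the first.

```python
# Hypothetical illustration of the stale layer-config bug; class and
# field names (TinyQuantizer, _layer_config) are invented for this sketch.

class TinyQuantizer:
    def __init__(self):
        self._layer_config = {}  # persists across quantize() calls

    def quantize(self, layers, config, clear_after=True):
        # Merge the user config into the cached per-layer config.
        for name in layers:
            self._layer_config.setdefault(name, {}).update(config)
        # Snapshot the effective config used for this run.
        result = {name: dict(cfg) for name, cfg in self._layer_config.items()}
        if clear_after:
            # The proposed fix: reset state at the end of quantization
            # so the next run starts from a clean slate.
            self._layer_config.clear()
        return result

fixed = TinyQuantizer()
buggy = TinyQuantizer()

# Run 1: 4-bit symmetric; Run 2: 8-bit with no "sym" key.
fixed.quantize(["linear1"], {"bits": 4, "sym": True})
ok = fixed.quantize(["linear1"], {"bits": 8})

buggy.quantize(["linear1"], {"bits": 4, "sym": True}, clear_after=False)
bad = buggy.quantize(["linear1"], {"bits": 8}, clear_after=False)

print("sym" in ok["linear1"])   # cleared between runs: no stale key
print(bad["linear1"].get("sym"))  # stale "sym" leaked from run 1
```

Without the `clear()`, the second run's `linear1` config keeps `sym=True` from the first run even though the new config never set it, which is the kind of incorrect layer config described above.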