intel / auto-round

Advanced quantization algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs".
https://arxiv.org/abs/2309.05516
Apache License 2.0

limit the scale minimum value not to 0 #211

Closed. WeiweiZhang1 closed this 2 months ago.

WeiweiZhang1 commented 2 months ago

Limit the scale's minimum value so it cannot be 0. Tested on common models (>=10 so far); currently this only affects the Qwen2-57B-A14B-Instruct model.
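The idea can be sketched as below. The helper name, signature, and epsilon value are illustrative assumptions, not auto-round's actual code; the point is that an all-constant weight channel would otherwise produce a scale of exactly 0 and break quantization:

```python
def compute_scale(w_min, w_max, num_bits=4, eps=1e-5):
    """Hypothetical sketch of a per-channel asymmetric quantization scale.

    Not auto-round's real API; it illustrates the PR's fix: keep the
    scale strictly positive so quantization never divides by zero.
    """
    scale = (w_max - w_min) / (2 ** num_bits - 1)
    # a constant channel (w_min == w_max) would yield scale == 0
    if scale == 0:
        scale = eps
    return scale
```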

wenhuach21 commented 2 months ago

Please also try just clipping, without the `if`, in the future.
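The suggestion amounts to replacing the conditional with an unconditional clip, roughly as follows (again a sketch with assumed names, not the repo's code); clipping is branch-free and also guards against tiny-but-nonzero scales below the floor:

```python
def compute_scale(w_min, w_max, num_bits=4, eps=1e-5):
    # reviewer's suggestion (sketch): clip unconditionally instead of
    # branching on scale == 0; in torch this would be torch.clamp(scale, min=eps)
    scale = (w_max - w_min) / (2 ** num_bits - 1)
    return max(scale, eps)
```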