intel / auto-round

Advanced Quantization Algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs".
https://arxiv.org/abs/2309.05516
Apache License 2.0
132 stars 18 forks

Qbits lm-eval incorrect behaviour #152

Closed wenhuach21 closed 1 month ago

wenhuach21 commented 1 month ago

[image attachment]

wenhuach21 commented 1 month ago

@zhewang1-intc

wenhuach21 commented 1 month ago

2024-06-08 10:02:19 WARNING qlinear_qbits.py L27: qlinear_qbits should be used with Intel Extension for Transformers.

This warning should only be triggered when running on CPU.
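
A minimal sketch of how such a warning could be gated on the target device; the function name, the `device` parameter, and the guard itself are assumptions for illustration, not the repository's actual fix:

```python
import logging

logger = logging.getLogger(__name__)


def maybe_warn_qbits(device: str) -> None:
    # Hypothetical guard: only emit the qlinear_qbits warning when the layer
    # is actually dispatched to CPU, where Intel Extension for Transformers
    # (qbits) is required. On CUDA/other devices the warning stays silent.
    if device == "cpu":
        logger.warning(
            "qlinear_qbits should be used with Intel Extension for Transformers."
        )
```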

wenhuach21 commented 1 month ago

Fixed.