intel / auto-round

Advanced quantization algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs".
https://arxiv.org/abs/2309.05516
Apache License 2.0

[Large impact] Set the default nsamples to 128 and low_gpu_mem_usage to False #174

Closed: wenhuach21 closed this issue 2 weeks ago
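
For context, a minimal sketch of how these defaults would surface in a quantization call. The keyword names nsamples and low_gpu_mem_usage are taken from the issue title; the exact AutoRound constructor signature and the model name used here are assumptions for illustration, not the definitive API.

```python
# Sketch only: assumes the AutoRound constructor accepts nsamples and
# low_gpu_mem_usage keywords as named in issue #174, and that bits/group_size
# are valid arguments; the model name is a hypothetical small example.
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "facebook/opt-125m"  # hypothetical small model for illustration
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# With this change the values below become the defaults; they are written
# out explicitly here only to make the new behavior visible.
autoround = AutoRound(
    model,
    tokenizer,
    bits=4,
    group_size=128,
    nsamples=128,             # new default number of calibration samples
    low_gpu_mem_usage=False,  # new default: favor speed over GPU memory savings
)
autoround.quantize()
```

Setting nsamples to 128 raises the calibration sample count, and disabling low_gpu_mem_usage keeps more tensors on the GPU during tuning, so users on memory-constrained devices would need to opt back in explicitly.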