intel / auto-round

Advanced quantization algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs"
https://arxiv.org/abs/2309.05516
Apache License 2.0

fix critical bug for gradient_accumulate_steps != 1 and reduce CPU memory of lm-head tuning #97

Closed WeiweiZhang1 closed 4 months ago

WeiweiZhang1 commented 4 months ago

Resolves the excessive CPU memory usage caused by saving layer outputs, and fixes incorrect random-index generation when using the 'rand' sampler.