intel / auto-round

Advanced quantization algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs"
https://arxiv.org/abs/2309.05516
Apache License 2.0

Adjust GPU usage based on free GPU memory space #127

Closed WeiweiZhang1 closed 2 weeks ago

WeiweiZhang1 commented 3 months ago

Adaptively tune the `low_gpu_mem_usage` argument based on the free GPU memory available at runtime.
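
The PR description is terse, so here is a minimal sketch of the idea: decide whether to enable a memory-saving path by comparing free device memory against a threshold. The function name, the threshold value, and the decision rule are illustrative assumptions, not the code merged in this PR; in a PyTorch setup the `free_bytes` input would typically come from `torch.cuda.mem_get_info()`.

```python
def choose_low_gpu_mem_usage(free_bytes: int, threshold_gib: float = 16.0) -> bool:
    """Hypothetical helper: enable low_gpu_mem_usage when free VRAM is scarce.

    free_bytes: free device memory in bytes (e.g. the first element of
    torch.cuda.mem_get_info() on a CUDA system).
    threshold_gib: assumed cutoff below which the memory-saving path is used.
    """
    free_gib = free_bytes / (1024 ** 3)
    # Below the cutoff, trade speed for memory by enabling the low-memory path.
    return free_gib < threshold_gib
```

For example, a device reporting 8 GiB free would enable the low-memory path, while one reporting 32 GiB free would not.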