intel/auto-round
Advanced quantization algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs".
https://arxiv.org/abs/2309.05516
Apache License 2.0 · 200 stars · 19 forks
refine the code and the speedup is notable #240 (Closed)
wenhuach21 closed this 1 week ago
wenhuach21 commented 1 week ago
On the 125M model: ~30% speedup. On 7B models: 15%-20% speedup.