intel/auto-round
Advanced quantization algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs" (https://arxiv.org/abs/2309.05516).
Apache License 2.0
If the whole block is excluded from quantization, a bug occurs
#141
Closed: wenhuach21 closed this 3 months ago
wenhuach21 commented 3 months ago:
fixed