intel/auto-round
Advanced Quantization Algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs" (https://arxiv.org/abs/2309.05516).
Apache License 2.0
Bugfix of group size mismatch with weight shape
#195
Closed
WeiweiZhang1 closed 3 months ago
WeiweiZhang1 commented 3 months ago
Falcon model issue fix.
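For context, the mismatch arises when a layer's weight input dimension is not evenly divisible by the configured quantization group size (Falcon-7B's hidden size of 4544, for instance, is not a multiple of a typical group size of 128). The sketch below illustrates the kind of guard such a fix involves; `resolve_group_size` and its per-channel fallback convention are illustrative assumptions, not the actual auto-round patch.

```python
# Illustrative sketch only, not the code from this PR.
# Guards against a group size that does not divide a layer's weight
# input dimension (the "group size mismatch with weight shape" case).
import torch


def resolve_group_size(weight: torch.Tensor, group_size: int) -> int:
    """Return a usable group size for per-group weight quantization.

    If `group_size` does not evenly divide the weight's input dimension
    (weight.shape[1]), fall back to per-channel quantization (-1 here
    denotes one group spanning the whole row) instead of failing later
    with a shape mismatch when the weight is reshaped into groups.
    """
    in_features = weight.shape[1]
    if group_size <= 0 or in_features % group_size == 0:
        return group_size
    # Hypothetical fallback: treat each output channel as a single group.
    return -1


# Example: a Falcon-style linear weight whose input dimension (4544)
# is not divisible by a group size of 128.
w = torch.randn(4544, 4544)
print(resolve_group_size(w, 128))  # 4544 % 128 != 0 -> -1 (per-channel)
```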