intel/auto-round
Advanced quantization algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs" (https://arxiv.org/abs/2309.05516).
Apache License 2.0
Issue #222: [WIP] hadamard support (Open)
Opened by wenhuach21, 2 months ago

wenhuach21 commented (2 months ago):
Hadamard support shows a negative impact with AutoRound in W2 and in W4A4 MXFP4 for Llama 3.
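For context on what a Hadamard transform does in weight quantization: rotation-based schemes multiply a weight matrix by an orthonormal Hadamard matrix so that large outlier weights are spread across many entries, which usually shrinks the per-tensor quantization scale. The sketch below is a hypothetical NumPy illustration of that idea, not the code under discussion in this issue; the `hadamard` and `fake_quant` helpers, the 4-bit setting, and the injected outlier are all illustrative assumptions.

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Orthonormal Hadamard matrix via the Sylvester construction (n = power of 2)."""
    assert n > 0 and n & (n - 1) == 0, "n must be a power of 2"
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)  # orthonormal: H @ H.T == I

def fake_quant(w: np.ndarray, bits: int = 4) -> np.ndarray:
    """Symmetric per-tensor round-to-nearest fake quantization (illustrative)."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax
    return np.clip(np.round(w / scale), -qmax - 1, qmax) * scale

rng = np.random.default_rng(0)
n = 64
W = rng.normal(size=(n, n))
W[0, 0] = 25.0  # inject a single weight outlier that inflates the scale

H = hadamard(n)
W_rot = H @ W @ H.T  # orthogonal rotation spreads the outlier over all entries

err_plain = np.linalg.norm(fake_quant(W) - W)
# rotate the quantized matrix back so both errors are measured in the same basis
err_rot = np.linalg.norm(H.T @ fake_quant(W_rot) @ H - W)
print(f"plain: {err_plain:.2f}  rotated: {err_rot:.2f}")
```

In this toy setup the rotated variant typically has lower reconstruction error because the outlier no longer dominates the quantization scale; the issue above reports the opposite effect for AutoRound at W2 and W4A4 MXFP4, which is presumably why it is flagged here.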