intel / auto-round
Advanced quantization algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs" (https://arxiv.org/abs/2309.05516).
Porting SQ to AutoRound
#199
Open
n1ck-guo opened this issue 3 months ago
n1ck-guo commented 3 months ago
[ ] example
[ ] API
[ ] remove duplicated processing/code to increase running speed
[ ] evaluation on a few models
[ ] tune alpha / execution order? (see the sketch after this list)
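Assuming "sq" here refers to SmoothQuant, the alpha in the last item would be the migration strength that shifts activation outliers into the weights before quantization. Below is a minimal, self-contained sketch of that scale computation under this assumption; the names `smooth_scales` and `apply_smoothing` are hypothetical and are not part of the auto-round API.

```python
# Hypothetical SmoothQuant-style per-channel smoothing sketch, NOT the
# auto-round API. Assumes per-input-channel activation abs-max statistics
# collected during calibration.
import torch


def smooth_scales(act_absmax: torch.Tensor, weight: torch.Tensor,
                  alpha: float = 0.5) -> torch.Tensor:
    """Per-input-channel scales s_j = max|X_j|^alpha / max|W_j|^(1 - alpha)."""
    w_absmax = weight.abs().amax(dim=0)   # max over output channels -> (in_features,)
    scales = act_absmax.pow(alpha) / w_absmax.pow(1.0 - alpha)
    return scales.clamp(min=1e-5)         # avoid dividing activations by ~0


def apply_smoothing(act: torch.Tensor, weight: torch.Tensor,
                    scales: torch.Tensor):
    """Divide activations and multiply weights by the scales; the matmul result is unchanged."""
    return act / scales, weight * scales


# Usage: the linear output stays (numerically) the same after smoothing.
x = torch.randn(8, 4096)        # calibration activations (tokens, in_features)
w = torch.randn(11008, 4096)    # linear weight (out_features, in_features)
s = smooth_scales(x.abs().amax(dim=0), w, alpha=0.5)
x_s, w_s = apply_smoothing(x, w, s)
assert torch.allclose(x @ w.t(), x_s @ w_s.t(), rtol=1e-3, atol=1e-2)
```

Tuning alpha then amounts to searching over this single hyperparameter (per model or per layer) for the best post-quantization accuracy, which is presumably what the checklist item refers to.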
wenhuach21 commented 3 months ago
Please hold off on merging until version 0.3 is released.