intel / auto-round
Advanced quantization algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs"
https://arxiv.org/abs/2309.05516
Apache License 2.0
Reminder to install auto-gptq/itrex before quantization in code/readme
#178
Closed
wenhuach21 closed this issue 1 week ago

wenhuach21 commented 1 week ago
Re-add auto-gptq in requirements.txt.
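Beyond pinning the dependency in requirements.txt, the reminder the issue title asks for could be implemented as a runtime check before quantization/export. A minimal sketch, assuming the module names `auto_gptq` (the auto-gptq package) and `intel_extension_for_transformers` (ITREX); the helper name `missing_packages` is hypothetical, not part of auto-round:

```python
import importlib.util


def missing_packages(pkgs):
    """Return the subset of module names that are not importable."""
    return [p for p in pkgs if importlib.util.find_spec(p) is None]


def check_quant_backend():
    """Remind the user to install a quantization backend before exporting.

    Module names below are assumptions based on the issue's mention of
    auto-gptq and itrex; adjust to the actual export target.
    """
    missing = missing_packages(["auto_gptq", "intel_extension_for_transformers"])
    if missing:
        raise ImportError(
            "Please install the required quantization backend(s) before "
            "quantization: pip install " + " ".join(missing)
        )
```

Calling `check_quant_backend()` at the start of the quantization entry point would surface a clear install hint instead of a late import failure.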