intel / auto-round

Advanced quantization algorithm for LLMs/VLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs"
https://arxiv.org/abs/2309.05516
Apache License 2.0

[Low priority] Add support for QuaRot, SpinQuant, and DuQuant #219

Open wenhuach21 opened 3 months ago

wenhuach21 commented 3 months ago

Not promising after initial testing, and the approach does not generalize well across models, e.g. Phi-3 and Mistral.
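
For context, the methods named in the title (QuaRot, SpinQuant, DuQuant) all apply an orthogonal rotation to weights/activations before quantizing, so that outlier channels are spread out and round-to-nearest loses less accuracy. Below is a minimal, self-contained sketch of that rotation idea only, assuming a Hadamard rotation and symmetric INT4 round-to-nearest; it is not AutoRound's implementation, and all function names in it are illustrative:

```python
# Sketch of rotation-based quantization (QuaRot / SpinQuant style idea):
# rotate the weight with an orthonormal Hadamard matrix, quantize, rotate back.
# Illustrative only; not part of the auto-round API.
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Build an n x n Hadamard matrix (n must be a power of two)."""
    assert n > 0 and (n & (n - 1)) == 0, "n must be a power of two"
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def quant_dequant_int4(w: np.ndarray) -> np.ndarray:
    """Symmetric per-output-channel round-to-nearest INT4 quant/dequant."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(w / scale), -8, 7)
    return q * scale

rng = np.random.default_rng(0)
n = 256
w = rng.normal(size=(n, n))
w[:, :4] *= 30.0                      # inject a few outlier columns

H = hadamard(n) / np.sqrt(n)          # orthonormal rotation: H @ H.T == I

err_plain = np.linalg.norm(quant_dequant_int4(w) - w)
# Rotate, quantize, rotate back. In a real model the rotation must also be
# folded into the adjacent activations/layers, which is where model-specific
# architecture differences (e.g. Phi-3, Mistral) complicate a general recipe.
w_rot = w @ H
err_rot = np.linalg.norm(quant_dequant_int4(w_rot) @ H.T - w)

print(f"RTN error without rotation:       {err_plain:.2f}")
print(f"RTN error with Hadamard rotation: {err_rot:.2f}")
```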