intel / auto-round

Advanced Quantization Algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs".
https://arxiv.org/abs/2309.05516
Apache License 2.0

hook AutoHfQuantizer of transformers to support different backends and mixed precision quantization #109

Closed wenhuach21 closed 3 months ago

wenhuach21 commented 3 months ago

Feature request

1. Support different kernels in different backends, including gptq/awq/itrex.
2. Support different bits and group_size for different layers.
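A mixed-precision setup like the one requested in point 2 could be expressed as a per-layer configuration map. The sketch below is purely illustrative: the key names (`bits`, `group_size`, `"default"`) and the layer names are assumptions for this example, not the project's actual API.

```python
# Hypothetical per-layer mixed-precision config (illustrative only, not
# the actual auto-round / transformers API).
layer_config = {
    # Fallback applied to layers not listed explicitly: 4-bit, group size 128.
    "default": {"bits": 4, "group_size": 128},
    # A precision-sensitive layer kept at 8 bits with a smaller group size.
    "model.layers.0.self_attn.q_proj": {"bits": 8, "group_size": 64},
}

def config_for(layer_name: str) -> dict:
    """Return the quantization settings for a layer, falling back to the default."""
    return layer_config.get(layer_name, layer_config["default"])
```

A quantizer hook could then look up each layer's settings at load time and dispatch to the matching kernel for the chosen backend.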

wenhuach21 commented 3 months ago

@n1ck-guo