intel / auto-round

Advanced Quantization Algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs"
https://arxiv.org/abs/2309.05516
Apache License 2.0

Set the default scale_dtype to FP16 #104

Closed wenhuach21 closed 3 months ago

wenhuach21 commented 4 months ago

There is no need to use an FP32 scale for packing with the AutoGPTQ Triton backend; we can make FP16 the default scale dtype instead. Nonetheless, it's essential to validate accuracy for some models first.
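For context, a minimal sketch of how the proposed default would be exercised from the quantization entry point. This assumes the `AutoRound` constructor accepts a `scale_dtype` argument alongside the usual bit-width settings; the exact signature and the model name used here are illustrative, not confirmed against the repo:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

# Illustrative model; any causal LM supported by the tool would do.
model_name = "facebook/opt-125m"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Explicitly requesting the FP16 scale this issue proposes as the default.
# With the change merged, omitting scale_dtype should behave the same way.
autoround = AutoRound(
    model,
    tokenizer,
    bits=4,
    group_size=128,
    scale_dtype="fp16",
)
autoround.quantize()
autoround.save_quantized("./tmp_autoround")
```

The accuracy caveat above is why such a change is typically gated on re-running evaluation: FP16 scales halve the packed-scale storage and match the Triton kernel's compute dtype, but models with outlier-heavy weight distributions could in principle lose precision in the per-group scales.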

wenhuach21 commented 3 months ago

aligned