intel / auto-round

Advanced Quantization Algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs"
https://arxiv.org/abs/2309.05516
Apache License 2.0

Fix asym kernel issue by following AutoGPTQ's PR #137

Closed · wenhuach21 closed this pull request 1 month ago

wenhuach21 commented 1 month ago

- w2g32 accuracy: verified, OK
- mixed bits: verified, OK
- known issue: Triton kernel issue on low CUDA versions
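For context, "w2g32" conventionally denotes 2-bit weight quantization with a group size of 32, and "asym" refers to asymmetric quantization, where each group stores a zero point in addition to a scale so the quantized range need not be centered on zero. The NumPy sketch below illustrates that scheme only; it is not the repository's kernel code, and the function names are hypothetical:

```python
import numpy as np

def asym_quantize(w, bits=2, group_size=32):
    """Asymmetric per-group quantization: each group of `group_size`
    weights gets its own scale and zero point."""
    qmax = (1 << bits) - 1  # 3 for 2-bit weights
    w = w.reshape(-1, group_size)
    wmin = w.min(axis=1, keepdims=True)
    wmax = w.max(axis=1, keepdims=True)
    scale = (wmax - wmin) / qmax          # step size per group
    zero = np.round(-wmin / scale)        # integer zero point per group
    q = np.clip(np.round(w / scale) + zero, 0, qmax)
    return q.astype(np.uint8), scale, zero

def asym_dequantize(q, scale, zero):
    # Reverse mapping: shift by the zero point, then rescale.
    return (q.astype(np.float32) - zero) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(64,)).astype(np.float32)
q, s, z = asym_quantize(w)
w_hat = asym_dequantize(q, s, z).reshape(-1)
print("max reconstruction error:", np.abs(w - w_hat).max())
```

Because every group adapts its own min/max range, the per-element reconstruction error stays on the order of the group's scale, which is why accuracy checks such as the w2g32 run above are done per configuration.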

wenhuach21 commented 1 month ago

Credit goes to @Qubitium (https://github.com/AutoGPTQ/AutoGPTQ/pull/640) and the GPTQ community.