intel / auto-round

Advanced quantization algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs"
https://arxiv.org/abs/2309.05516
Apache License 2.0

update xpu format exporting #214

Closed · WeiweiZhang1 closed 1 month ago

WeiweiZhang1 commented 1 month ago

The XPU backend has aligned its conversion format with Hugging Face, and auto-round should follow up on this change. Testing on XPU has been completed.
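
For context, a minimal sketch of what an XPU-targeted export might look like after this change, assuming the usual auto-round quantize/export flow; the model name, output directory, and the `format` string are illustrative assumptions rather than the exact options introduced by this PR.

```python
# Hedged sketch: quantize a small model with auto-round and export it in the
# Hugging Face-aligned layout for the XPU backend. The format value and the
# model/output names below are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer
from auto_round import AutoRound

model_name = "facebook/opt-125m"  # hypothetical small model for a quick test
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Quantize weights with auto-round's signed-gradient-descent rounding (defaults).
autoround = AutoRound(model, tokenizer, bits=4, group_size=128)
autoround.quantize()

# Export so the XPU backend can load the checkpoint directly; the exact
# format keyword ("itrex_xpu" here) is an assumption, see the README for
# the supported export formats.
autoround.save_quantized("./opt-125m-w4-xpu", format="itrex_xpu")
```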