intel / auto-round

Advanced Quantization Algorithm for LLMs. This is the official implementation of "Optimize Weight Rounding via Signed Gradient Descent for the Quantization of LLMs".
https://arxiv.org/abs/2309.05516
Apache License 2.0

add meta3.1-70B-instruct model, refine docs #231

Closed. WeiweiZhang1 closed this 2 weeks ago.

wenhuach21 commented 2 weeks ago

Before merging, please update the commit ID requirement in the 70B recipe.
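Pinning a recipe's dependency to a specific commit is commonly done with a VCS reference in a pip requirements file. A minimal sketch of what such a pin could look like (the commit hash below is a placeholder, not the one requested in this PR):

```
# requirements.txt fragment (hypothetical): pin auto-round to an exact commit
# so the 70B recipe reproduces the same quantization behavior.
auto-round @ git+https://github.com/intel/auto-round.git@<commit-id>
```

Pinning to a commit rather than a release tag guards against upstream changes silently altering the recipe's results.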