Aaronhuang-778 / BiLLM

(ICML 2024) BiLLM: Pushing the Limit of Post-Training Quantization for LLMs
https://arxiv.org/abs/2402.04291
MIT License

BiLLM: Pushing the Limit of Post-Training Quantization for LLMs [PDF]

The University of Hong Kong · Beihang University · ETH Zürich

Abstract

Pretrained large language models (LLMs) exhibit exceptional general language processing capabilities but come with significant demands on memory and computational resources. As a powerful compression technology, binarization can reduce model weights to a mere 1 bit, drastically lowering the expensive computation and memory requirements. However, existing quantization techniques fall short of maintaining LLM performance at ultra-low bit-widths. In response to this challenge, we present BiLLM, a groundbreaking 1-bit post-training quantization scheme tailored for pretrained LLMs. Based on the weight distribution of LLMs, BiLLM first identifies and structurally selects salient weights, and minimizes the compression loss through an effective binary residual approximation strategy. Moreover, considering the bell-shaped distribution of the non-salient weights, we propose an optimal splitting search to group and binarize them accurately. BiLLM achieves, for the first time, high-accuracy inference (e.g., 8.41 perplexity on LLaMA2-70B) with only 1.08-bit weights across various LLM families and evaluation metrics, outperforming SOTA quantization methods for LLMs by significant margins. Moreover, BiLLM can binarize an LLM with 7 billion weights within 0.5 hours on a single GPU, demonstrating satisfactory time efficiency.
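
As a rough illustration of the two mechanisms described above, the PyTorch sketch below shows (a) generic residual binarization, which BiLLM applies to the selected salient weights, and (b) a simple splitting search that binarizes the inner and outer parts of a bell-shaped weight distribution with separate scales. This is a minimal sketch under simplifying assumptions (per-tensor scales, a fixed candidate grid, hypothetical function names), not the code in this repository.

```python
import torch

def residual_binarize(w: torch.Tensor, order: int = 2) -> torch.Tensor:
    """Approximate w as a sum of `order` scaled sign matrices (residual binarization)."""
    approx = torch.zeros_like(w)
    residual = w.clone()
    for _ in range(order):
        scale = residual.abs().mean()        # per-tensor scale; BiLLM groups weights more finely
        term = scale * torch.sign(residual)
        approx = approx + term
        residual = residual - term           # what the next binary term still has to explain
    return approx

def split_binarize(w: torch.Tensor, num_candidates: int = 32) -> torch.Tensor:
    """Try several break points p, binarize |w| <= p and |w| > p with separate scales,
    and keep the split with the lowest reconstruction error."""
    best_err, best_approx = float("inf"), torch.zeros_like(w)
    for p in torch.linspace(0.05, 1.0, num_candidates) * w.abs().max():
        approx = torch.zeros_like(w)
        inner = w.abs() <= p
        for mask in (inner, ~inner):
            if mask.any():
                approx[mask] = w[mask].abs().mean() * torch.sign(w[mask])
        err = torch.norm(w - approx)
        if err < best_err:
            best_err, best_approx = err, approx
    return best_approx

# Toy check of the relative reconstruction error on a random weight matrix
w = torch.randn(256, 256)
for fn in (residual_binarize, split_binarize):
    print(fn.__name__, (torch.norm(w - fn(w)) / torch.norm(w)).item())
```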

News

Dependencies

All binarization processes and experiments were run on a single 80GB NVIDIA A100. However, the entire process can also be run on a single 24GB NVIDIA 3090 Ti when the model has fewer than 70B parameters.
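
A quick, optional way to confirm that the visible GPU meets these memory requirements before launching a long run (not part of the BiLLM scripts; just a standard PyTorch query):

```python
import torch

# Report the name and total memory of the first visible CUDA device.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"{props.name}: {props.total_memory / 1024**3:.0f} GiB")
else:
    print("No CUDA device visible.")
```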

LLM Binarization

Binarization for OPT families

python3 run.py facebook/opt-6.7b c4 braq --blocksize 128 --salient_metric hessian

Binarization for LLaMA families

python3 run.py meta-llama/Llama-2-7b-hf c4 braq --blocksize 128 --salient_metric hessian

or

python3 run.py huggyllama/llama-7b c4 braq --blocksize 128 --salient_metric hessian

Binarization for Vicuna families (Instruction Fine-tuned Models)

python3 run.py lmsys/vicuna-7b-v1.5 c4 braq --blocksize 128 --salient_metric hessian
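
The --salient_metric hessian flag used in the commands above points to Hessian-aware selection of salient weights, in the spirit of the GPTQ line of second-order post-training methods. The sketch below shows one common proxy for that idea (squared weights scaled by an estimate of the Hessian diagonal built from calibration activations); the function name, the 5% saliency fraction, and the exact score are illustrative assumptions, not the criterion implemented in run.py.

```python
import torch

def hessian_salience_mask(w: torch.Tensor, x: torch.Tensor, frac: float = 0.05) -> torch.Tensor:
    """Flag a fraction of weights per output row as 'salient' using a Hessian-diagonal proxy.

    w: (out_features, in_features) weight of a linear layer
    x: (num_samples, in_features) calibration inputs to that layer
    """
    h_diag = (x ** 2).mean(dim=0)                 # proxy for the Hessian diagonal of the layer loss
    scores = w.pow(2) * h_diag                    # per-weight sensitivity estimate
    k = max(1, int(frac * w.shape[1]))
    topk = scores.topk(k, dim=1).indices          # k most sensitive columns per output row
    mask = torch.zeros_like(w, dtype=torch.bool)
    mask.scatter_(1, topk, torch.ones_like(topk, dtype=torch.bool))
    return mask                                   # True = salient (gets higher-fidelity binarization)

# Toy usage: roughly `frac` of the weights end up flagged as salient
w, x = torch.randn(16, 64), torch.randn(256, 64)
print(hessian_salience_mask(w, x).float().mean().item())
```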

Results

With an average of only 1.08-bit weights, BiLLM outperforms prior SOTA post-training quantization methods across the OPT, LLaMA, and Vicuna families and evaluation metrics; for example, it reaches 8.41 perplexity on LLaMA2-70B. See the paper for the full result tables and figures.

Related Projects

GPTQ: Accurate Post-training Compression for Generative Pretrained Transformers

PB-LLM: Partially Binarized Large Language Models

AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration

Citation

If you find BiLLM useful and helpful for your work, please cite this paper:

@article{huang2024billm,
  title={BiLLM: Pushing the Limit of Post-Training Quantization for LLMs},
  author={Huang, Wei and Liu, Yangdong and Qin, Haotong and Li, Ying and Zhang, Shiming and Liu, Xianglong and Magno, Michele and Qi, Xiaojuan},
  journal={arXiv preprint arXiv:2402.04291},
  year={2024}
}