bitsandbytes-foundation / bitsandbytes

Accessible large language models via k-bit quantization for PyTorch.
https://huggingface.co/docs/bitsandbytes/main/en/index
MIT License

Where are the outliers stored in LLM.int8 quantization for inference using the transformers library on an AMD GPU? #1320

Open vbayanag opened 2 months ago

vbayanag commented 2 months ago

Hi, I'm using BitsAndBytesConfig with HF's Transformers library to quantize the facebook/opt-66B model. But when I print the dtype of the weights of various layers, all of them turn out to be int8.

[Screenshot: printed weight dtypes for several layers, all showing torch.int8]
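For reference, here is a minimal sketch of the kind of check I mean (using the smaller facebook/opt-350m as a stand-in for opt-66B; the layer path and the llm_int8_threshold value are just illustrative):

```python
# Minimal sketch: load an OPT checkpoint in 8-bit and inspect what is stored.
# facebook/opt-350m, the layer path, and the threshold are illustrative choices.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(load_in_8bit=True, llm_int8_threshold=6.0)
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-350m",
    quantization_config=bnb_config,
    device_map="auto",
)

linear = model.model.decoder.layers[0].self_attn.q_proj  # a bitsandbytes Linear8bitLt
print(type(linear))             # bitsandbytes.nn.modules.Linear8bitLt
print(linear.weight.dtype)      # torch.int8 -- the whole matrix is stored as int8
print(linear.weight.SCB.dtype)  # per-row scale statistics, kept in floating point
```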

This makes me wonder: where are the outliers stored? The LLM.int8() algorithm requires the outliers to be kept in fp16/fp32 as part of its matrix decomposition scheme. Can someone kindly clarify?

Moreover, Figure 2 of the paper shows that FP16 inputs are converted to int8 after the outliers are detected, but in our case the model weights are already converted/quantized.

[Figure 2 from the LLM.int8() paper: mixed-precision matrix decomposition]

Xing-Zhuang commented 2 months ago

When Linear8bitLt runs its forward pass, it finds the outlier columns of the activations, and the corresponding rows of the weight matrix are dequantized. See bitsandbytes/nn/modules.py for more details.
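Roughly like this (a pure-PyTorch toy of the decomposition, not the actual bitsandbytes kernels; the threshold, shapes, and float32 math are simplifications of the real fp16/int8 path):

```python
import torch

threshold = 6.0
x = torch.randn(4, 8)                 # activations (fp16 in the real path)
x[:, 3] *= 20                         # make feature column 3 an "outlier" dimension
w_int8 = torch.randint(-127, 128, (16, 8), dtype=torch.int8)  # stored int8 weight
scb = torch.rand(16) + 0.5            # per-output-row absmax scales (like weight.SCB)

# 1. detect the outlier feature columns of the activation
outlier_mask = x.abs().amax(dim=0) > threshold

# 2. outlier part: dequantize the weight slices for those features and matmul in float
w_fp_outlier = w_int8[:, outlier_mask].float() * scb.unsqueeze(1) / 127
out_outlier = x[:, outlier_mask] @ w_fp_outlier.T

# 3. regular part: quantize activations per row, accumulate in integer precision
#    (int32 in the real kernels), then dequantize the result
x_reg = x[:, ~outlier_mask]
x_scale = x_reg.abs().amax(dim=1, keepdim=True) / 127
x_int8 = torch.round(x_reg / x_scale).to(torch.int8)
acc = (x_int8.to(torch.int32).unsqueeze(1)
       * w_int8[:, ~outlier_mask].to(torch.int32).unsqueeze(0)).sum(-1)
out_regular = acc.float() * x_scale * (scb / 127)

out = out_outlier + out_regular       # combined output of the linear layer

# compare with a plain float matmul against the fully dequantized weight:
w_fp = w_int8.float() * scb.unsqueeze(1) / 127
print((out - x @ w_fp.T).abs().max())  # small: only activation-quantization error remains
```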

ZxAndJb commented 1 month ago

I am not sure whether I understand this correctly. Based on my check, the whole weight matrix is still quantized to int8, including the rows corresponding to outlier features. During the forward pass, once the outliers are detected, those rows are dequantized back to float16. So the difference is that the outlier part is dequantized first and then goes through the matrix multiplication in float16, while the non-outlier part is computed in int8 (with int32 accumulation) and dequantized afterwards. As mentioned in the paper, quantizing outliers harms performance greatly, so I am confused why the actual implementation still introduces this quantization error for the outlier rows. Maybe I am wrong; I'm waiting for someone to discuss.
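To make the last point concrete, here is a small sketch of the round-trip I mean: the rows are stored in int8 and only dequantized at forward time, so a per-row-scaled rounding error remains (the values below are made up):

```python
import torch

row = torch.randn(4096)                        # one weight row, originally fp16/fp32
absmax = row.abs().max()                       # per-row scale (what SCB stores)
row_int8 = torch.round(row / absmax * 127).to(torch.int8)
row_dequant = row_int8.float() * absmax / 127  # what the forward pass actually uses

err = (row - row_dequant).abs()
print(err.max().item(), (absmax / 254).item())  # rounding error is bounded by absmax / 254
```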