vbayanag opened this issue 2 months ago
During the linear8bit forward pass, it finds the activation outlier columns and dequantizes the corresponding rows of the weight matrix. See bitsandbytes/nn/modules for more details.
I am not sure whether I understand this correctly. Based on my reading, the whole weight matrix still gets quantized to int8, including the rows containing outliers. During the forward pass, those rows are converted back to float16 once outliers are detected. So the difference is that the outlier part must be dequantized first and then multiplied in float16, while the parts without outliers are computed in int8 (accumulating in int32) and dequantized afterwards. As the paper notes, quantizing outliers harms performance greatly, so I am confused why the actual implementation still introduces this quantization error. Maybe I am wrong; I'd be glad if someone could discuss.
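For concreteness, here is a toy numpy sketch of that decomposition as I understand it. The function name, the 6.0 threshold, and the scaling details are illustrative assumptions, not the real bitsandbytes kernels:

```python
import numpy as np

def llm_int8_matmul_sketch(x_fp16, w_fp16, threshold=6.0):
    """Toy sketch of LLM.int8()-style mixed decomposition: activation
    columns whose max magnitude exceeds `threshold` are multiplied in
    fp16 against the matching weight rows; everything else goes through
    a simulated int8 path with vector-wise absmax scaling."""
    # 1. Detect outlier columns of the activation matrix.
    outlier_cols = np.abs(x_fp16).max(axis=0) > threshold
    regular_cols = ~outlier_cols

    # 2. fp16 path: outlier activation columns x matching weight rows.
    out_fp16 = x_fp16[:, outlier_cols] @ w_fp16[outlier_cols, :]

    # 3. int8 path: absmax-quantize the remaining sub-matrices
    #    (per activation row, per weight column).
    x_sub, w_sub = x_fp16[:, regular_cols], w_fp16[regular_cols, :]
    cx = np.abs(x_sub).max(axis=1, keepdims=True) / 127.0
    cw = np.abs(w_sub).max(axis=0, keepdims=True) / 127.0
    x_i8 = np.round(x_sub / cx).astype(np.int8)
    w_i8 = np.round(w_sub / cw).astype(np.int8)

    # 4. int8 matmul accumulated in int32, then dequantized by the
    #    outer product of the two scale vectors.
    out_int8 = (x_i8.astype(np.int32) @ w_i8.astype(np.int32)) * (cx * cw)

    return out_fp16 + out_int8
```

In this toy version the weight rows for the outlier columns never go through the round-trip quantization, which is the part I thought the paper requires; in the actual library the weight is stored fully in int8 and those rows are dequantized on the fly, which is where the extra error I mentioned comes from.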
Hi, I'm using `BitsAndBytesConfig` in HF's Transformers library to quantize the facebook/opt-66B model. But when I print the dtype of the weights of various layers, all of them turn out to be `int8`. This makes me wonder: where are the outliers stored? The LLM.int8 algorithm requires outliers to be kept in fp16/fp32 as part of the matrix decomposition. Can someone kindly clarify?
Moreover, Figure 2 of the paper shows FP16 inputs being converted to int8 after the outliers are detected, but in our case the model weights are already converted/quantized at load time.
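For reference, this is roughly the check I mean, as a minimal sketch. I'm assuming a recent transformers + bitsandbytes install, and I'm using the smaller facebook/opt-125m checkpoint here just so it runs on one GPU:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# Load an OPT checkpoint with 8-bit weights via bitsandbytes.
model = AutoModelForCausalLM.from_pretrained(
    "facebook/opt-125m",
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)

# All quantized linear weights print as torch.int8; no separate
# fp16 copy of the outlier rows shows up among the parameters.
for name, param in model.named_parameters():
    if name.endswith(".weight") and param.dtype == torch.int8:
        print(name, tuple(param.shape), param.dtype)
```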