mit-han-lab / smoothquant

[ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models
https://arxiv.org/abs/2211.10438
MIT License

Why only 4 layers? #95

Open VincentXWD opened 2 months ago

VincentXWD commented 2 months ago

Hello developers, I'm inspecting SmoothQuant and used the script below to check the quantized model's parameter sizes:

from smoothquant.opt import Int8OPTForCausalLM

model_name = "mit-han-lab/opt-2.7b-smoothquant"

# Load the pre-quantized W8A8 OPT-2.7B checkpoint
model_smoothquant = Int8OPTForCausalLM.from_pretrained(model_name, device_map='auto')

# List every entry registered as an nn.Parameter on the module tree
for name, param in model_smoothquant.named_parameters():
    print(f"Parameter Name: {name}, Parameter Shape: {param.shape}")

I noticed that the loop only prints 4 parameters, with no decoder-layer weights at all:

Parameter Name: model.decoder.embed_tokens.weight, Parameter Shape: torch.Size([50272, 2560])
Parameter Name: model.decoder.embed_positions.weight, Parameter Shape: torch.Size([2050, 2560])
Parameter Name: model.decoder.final_layer_norm.weight, Parameter Shape: torch.Size([2560])
Parameter Name: model.decoder.final_layer_norm.bias, Parameter Shape: torch.Size([2560])
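
My guess is that the quantized linear modules register their int8 weights as buffers rather than as nn.Parameter, so named_parameters() only yields the remaining floating-point tensors (embeddings and the final layer norm). If that is the case, the per-layer weights should still show up when walking named_buffers() (or the full state_dict()). A minimal check, assuming the buffer registration is indeed how the checkpoint is stored:

# Sketch: list buffers as well, since named_parameters() skips anything
# registered via register_buffer() (my assumption about the int8 weights).
for name, buf in model_smoothquant.named_buffers():
    print(f"Buffer Name: {name}, Buffer Shape: {buf.shape}, dtype: {buf.dtype}")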

Could someone explain this phenomenon? Thanks!