huggingface / transformers

🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
https://huggingface.co/transformers
Apache License 2.0

Stablelm-2-1_6b-chat config extracted from GGUF file differs from source model config #34426

Closed. Isotr0py closed this issue 3 weeks ago.

Isotr0py commented 1 month ago

System Info

Who can help?

@SunMarc Also cc @VladOS95-cyber since you added GGUF support for StableLM :)

Information

Tasks

Reproduction

from transformers import AutoConfig

# Config from the original HF model repo
config_hf = AutoConfig.from_pretrained("stabilityai/stablelm-2-1_6b-chat")
# Config reconstructed from the GGUF file's metadata
config_gguf = AutoConfig.from_pretrained("Crataco/stablelm-2-1_6b-chat-imatrix-GGUF", gguf_file="stablelm-2-1_6b-chat.IQ4_XS.imx.gguf")
print(config_hf)
print(config_gguf)

Outputs

StableLmConfig {
  ...
  "use_qkv_bias": true,
  "vocab_size": 100352
}

StableLmConfig {
  ...
  "use_qkv_bias": false,
  "vocab_size": 100352
}

Expected behavior

The stabilityai/stablelm-2-1_6b-chat model has use_qkv_bias=True. However, the config extracted from the stablelm-2-1_6b-chat GGUF file has use_qkv_bias=False, which causes the model to fail to initialize the qkv_proj bias.
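
For context, a minimal torch sketch of the failure mode (the layer and size here are illustrative, not the exact transformers internals): with use_qkv_bias=False the attention projections are built without a bias parameter, so the bias tensors stored in the checkpoint have no target to load into.

```python
import torch.nn as nn

hidden_size = 2048  # illustrative size only

# Config says use_qkv_bias=False -> projection is created without a bias term.
q_proj_no_bias = nn.Linear(hidden_size, hidden_size, bias=False)
# The original model was trained with use_qkv_bias=True -> its weights include q_proj.bias.
q_proj_with_bias = nn.Linear(hidden_size, hidden_size, bias=True)

# Loading the biased state dict into the bias-less module fails.
try:
    q_proj_no_bias.load_state_dict(q_proj_with_bias.state_dict())
except RuntimeError as e:
    print(e)  # reports an unexpected "bias" key in the state_dict
```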

VladOS95-cyber commented 1 month ago

Hey @Isotr0py, @SunMarc! By default, use_qkv_bias is always false, because this parameter is not stored in the GGUF metadata and there is no logic to derive it in https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py. The original model can have use_qkv_bias set to either true or false, depending on the config attached to the model. So, for now, if you want the GGUF model to match the original one exactly, you should explicitly pass a config with use_qkv_bias=True.
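
A minimal sketch of that workaround, assuming from_pretrained honors an explicitly passed config together with gguf_file (repo and file names taken from the reproduction above):

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Take the config from the original model, which has use_qkv_bias=True ...
config = AutoConfig.from_pretrained("stabilityai/stablelm-2-1_6b-chat")

# ... and pass it explicitly when loading the GGUF weights, instead of relying
# on the config reconstructed from the GGUF metadata.
model = AutoModelForCausalLM.from_pretrained(
    "Crataco/stablelm-2-1_6b-chat-imatrix-GGUF",
    gguf_file="stablelm-2-1_6b-chat.IQ4_XS.imx.gguf",
    config=config,
)
```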

Isotr0py commented 1 month ago

@VladOS95-cyber Thanks for the explanation! I think a potential solution is to check whether tensors such as attn_q.bias are present in the GGUF file, and implement that in #34450.
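
A rough sketch of that check using the gguf Python package (the helper name and the exact tensor-name suffix are assumptions for illustration, not the final #34450 implementation):

```python
from gguf import GGUFReader

def gguf_has_qkv_bias(gguf_path: str) -> bool:
    """Infer use_qkv_bias by looking for attention bias tensors in the GGUF file."""
    reader = GGUFReader(gguf_path)
    return any(t.name.endswith("attn_q.bias") for t in reader.tensors)

# The extracted config could then be patched from the tensor listing instead of
# a metadata key that GGUF does not store:
# config.use_qkv_bias = gguf_has_qkv_bias("stablelm-2-1_6b-chat.IQ4_XS.imx.gguf")
```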

But I'm afraid that this will increase the CPU overhead for GGUF config extraction. WDYT?