Closed by Isotr0py 3 weeks ago
VladOS95-cyber commented:

Hey @Isotr0py, @SunMarc! By default, `use_qkv_bias` is always false, because this parameter is not stored in the GGUF config and there is no logic to derive it in https://github.com/ggerganov/llama.cpp/blob/master/convert_hf_to_gguf.py. The original model can have `use_qkv_bias` set to either true or false, depending on the config attached to the model. So, if you want the GGUF model to behave exactly like the original one, you should explicitly pass a config with `use_qkv_bias=True`, at least for now (see the sketch below).
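A minimal sketch of that workaround, assuming the `config` argument passed to `from_pretrained` takes precedence over the config extracted from the GGUF metadata; the GGUF repo and filename below are hypothetical placeholders:

```python
from transformers import AutoConfig, AutoModelForCausalLM

# Start from the original model's config, which carries use_qkv_bias=True,
# and set it explicitly for clarity.
config = AutoConfig.from_pretrained("stabilityai/stablelm-2-1_6b-chat")
config.use_qkv_bias = True

# "your-username/stablelm-2-1_6b-chat-GGUF" and the filename are placeholders.
model = AutoModelForCausalLM.from_pretrained(
    "your-username/stablelm-2-1_6b-chat-GGUF",
    gguf_file="stablelm-2-1_6b-chat.Q4_K_M.gguf",
    config=config,  # explicit config instead of the one parsed from GGUF
)
```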
Isotr0py commented:

@VladOS95-cyber Thanks for the explanation! I think a potential solution is to check whether `attn_q.bias` etc. are present in the GGUF tensors, and implement that in #34450 (a rough sketch follows below). But I'm afraid that this will increase the CPU overhead of GGUF config extraction. WDYT?
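A rough sketch of such a check using the `gguf` Python package, which only reads tensor metadata (where exactly this would hook into transformers' GGUF config extraction is left open):

```python
from gguf import GGUFReader

def gguf_has_qkv_bias(gguf_path: str) -> bool:
    """Return True if the GGUF file contains attention q/k/v bias tensors."""
    reader = GGUFReader(gguf_path)
    # Tensor names follow llama.cpp conventions, e.g. "blk.0.attn_q.bias".
    return any(
        tensor.name.endswith(("attn_q.bias", "attn_k.bias", "attn_v.bias"))
        for tensor in reader.tensors
    )

# e.g. config.use_qkv_bias = gguf_has_qkv_bias("stablelm-2-1_6b-chat.Q4_K_M.gguf")
```

Since `GGUFReader` memory-maps the file, scanning the tensor index should be fairly cheap, though it does add an extra pass during config extraction.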
System Info

transformers version: 4.46.0

Who can help?

@SunMarc Also cc @VladOS95-cyber since you added GGUF support for StableLM :)
Information

Tasks

Reproduction
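The reported behavior can be sketched as follows, assuming a GGUF export of the model (the repo and filename here are hypothetical placeholders):

```python
from transformers import AutoConfig, AutoModelForCausalLM

repo = "your-username/stablelm-2-1_6b-chat-GGUF"  # hypothetical repo
fname = "stablelm-2-1_6b-chat.Q4_K_M.gguf"        # hypothetical filename

# The config extracted from the GGUF metadata reports use_qkv_bias=False,
# even though the original model uses use_qkv_bias=True.
config = AutoConfig.from_pretrained(repo, gguf_file=fname)
print(config.use_qkv_bias)  # False

# Initialization then fails: the checkpoint contains q/k/v bias tensors
# that the bias-less attention modules do not expect.
model = AutoModelForCausalLM.from_pretrained(repo, gguf_file=fname)
```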
Expected behavior
The `stabilityai/stablelm-2-1_6b-chat` model has `use_qkv_bias=True`. However, the config extracted from the stablelm-2-1_6b-chat GGUF file has `use_qkv_bias=False`, causing the model to fail to initialize with the qkv_proj bias.