ggerganov / llama.cpp

LLM inference in C/C++
MIT License

Looking for help for using llama.cpp with Phi3 model and LoRA #7164

Closed · SHIMURA0 closed this 2 days ago

SHIMURA0 commented 2 months ago

Recently I used QLoRA to fine-tune the Phi3-mini-4k-instruct model and saved the LoRA parameters. I plan to merge the LoRA layers onto the original model in Ollama. I started as usual with llama.cpp; in particular, I used the Python script "convert-lora-to-ggml.py" to convert the LoRA parameters so they can be used in Ollama, but I hit the following ERROR:

INFO:lora-to-gguf:model.layers.0.mlp.down_proj => blk.0.ffn_down.weight.loraA (8192, 32) float32 1.00MB
INFO:lora-to-gguf:model.layers.0.mlp.down_proj => blk.0.ffn_down.weight.loraB (3072, 32) float32 0.38MB
INFO:lora-to-gguf:model.layers.0.mlp.gate_up_proj => blk.0.ffn_up.weight.loraA (3072, 32) float32 0.38MB
INFO:lora-to-gguf:model.layers.0.mlp.gate_up_proj => blk.0.ffn_up.weight.loraB (16384, 32) float32 2.00MB
ERROR:lora-to-gguf:Error: could not map tensor name base_model.model.model.layers.0.self_attn.qkv_proj.lora_A.weight
ERROR:lora-to-gguf: Note: the arch parameter must be specified if the model is not llama

(By the way, I applied LoRA to the "qkv_proj", "gate_up_proj", and "down_proj" layers of the Phi3 model.)
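To see exactly which tensor names the converter has to map, it helps to list the keys stored in the saved adapter. A minimal sketch using the safetensors API (the adapter path and filename below are placeholders for wherever PEFT saved the adapter):

```python
# Sketch: list the tensor names and shapes stored in a PEFT LoRA adapter.
# The path is a placeholder; point it at your saved adapter_model.safetensors.
from safetensors import safe_open

adapter_path = "phi3-lora/adapter_model.safetensors"  # hypothetical location

with safe_open(adapter_path, framework="pt") as f:
    for name in f.keys():
        print(name, f.get_slice(name).get_shape())
```

Names such as base_model.model.model.layers.0.self_attn.qkv_proj.lora_A.weight are the ones the script fails to map.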

I would be grateful if someone could give me some suggestions on solving this issue. Thanks in advance!

SHIMURA0 commented 1 month ago

I find that the layers of Phi2 and Phi3 are named differently: in Phi2 the fused attention projection is named Wqkv, and llama.cpp converts those LoRA weights to GGML fine, while in Phi3 the same layer is named qkv_proj. Could this naming difference be the reason llama.cpp fails to convert to GGML?

teaglin commented 1 month ago

Any update on this? I am running into the same issue. LoRA runs correctly with transformers, but when I convert to llama.cpp it gives me nonsense output.

SHIMURA0 commented 1 month ago

Hope this will be fixed as soon as possible.

SHIMURA0 commented 1 month ago

The reason is that llama.cpp treats Phi3 as a llama architecture, i.e., it expects the merged qkv_proj to be split into separate q_proj, k_proj, and v_proj layers. One workaround, posted by @Raibows at https://github.com/vllm-project/vllm/issues/4715, is to convert the tensor weights of your adapter/LoRA checkpoint to match that layout; he provides a script at https://gist.github.com/Raibows/079713a060f0c49c8f3b47c227aff722.
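For reference, here is a minimal sketch of that idea: split the fused qkv_proj LoRA tensors into separate q_proj / k_proj / v_proj entries so a llama-style converter can map them. The per-projection size of 3072 assumes Phi-3-mini-4k-instruct (hidden_size 3072, equal query/key/value widths), and the paths are placeholders; see the linked gist for the full conversion script (the fused gate_up_proj may need an analogous split into gate_proj and up_proj).

```python
# Sketch (not the exact gist): split a fused Phi-3 qkv_proj LoRA adapter into
# separate q_proj / k_proj / v_proj tensors for a llama-style converter.
# Sizes assume Phi-3-mini (hidden_size = 3072); paths are placeholders.
import torch
from safetensors.torch import load_file, save_file

adapter_in = "phi3-lora/adapter_model.safetensors"         # hypothetical input
adapter_out = "phi3-lora-split/adapter_model.safetensors"  # hypothetical output

q_dim = k_dim = v_dim = 3072  # per-projection output width for Phi-3-mini

tensors = load_file(adapter_in)
out = {}
for name, t in tensors.items():
    if "qkv_proj" not in name:
        out[name] = t
        continue
    if "lora_A" in name:
        # lora_A maps hidden_size -> rank and is shared by q, k and v after the split.
        for proj in ("q_proj", "k_proj", "v_proj"):
            out[name.replace("qkv_proj", proj)] = t.clone()
    else:
        # lora_B maps rank -> fused (q|k|v) output; split along the output dimension.
        q, k, v = torch.split(t, [q_dim, k_dim, v_dim], dim=0)
        out[name.replace("qkv_proj", "q_proj")] = q.contiguous()
        out[name.replace("qkv_proj", "k_proj")] = k.contiguous()
        out[name.replace("qkv_proj", "v_proj")] = v.contiguous()

save_file(out, adapter_out)
```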

I have tested it and it successfully converts the LoRA weights to GGML, but there is another problem: Ollama cannot apply these GGML LoRA weights back onto Phi3-instruct. I think we need to somehow merge the LoRA weights back into the model...
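One workaround along those lines is to merge the adapter into the base model with PEFT before any GGUF conversion, so Ollama never has to load a separate LoRA file. A hedged sketch (the model ID and directories are placeholders):

```python
# Sketch: fold the LoRA adapter into the base Phi-3 weights with PEFT, then save
# the merged model so it can be converted as a plain HF checkpoint.
# Directories are placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "microsoft/Phi-3-mini-4k-instruct"
adapter_dir = "phi3-lora"    # hypothetical adapter directory
merged_dir = "phi3-merged"   # output directory for the merged model

base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", trust_remote_code=True  # Phi-3 may require remote code
)
model = PeftModel.from_pretrained(base, adapter_dir)
model = model.merge_and_unload()  # applies the LoRA deltas to the base weights

model.save_pretrained(merged_dir)
AutoTokenizer.from_pretrained(base_id).save_pretrained(merged_dir)
```

The merged directory can then be converted with llama.cpp's regular HF-to-GGUF converter and imported into Ollama without any adapter.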

SHIMURA0 commented 1 month ago

anyone?

dimitristaufer commented 1 month ago

I have the same issue...

github-actions[bot] commented 2 days ago

This issue was closed because it has been inactive for 14 days since being marked as stale.