mbzuai-oryx / LLaVA-pp

🔥🔥 LLaVA++: Extending LLaVA with Phi-3 and LLaMA-3 (LLaVA LLaMA-3, LLaVA Phi-3)

Fine-tuning with LoRA: output never ends. #29

Open gyupro opened 6 months ago

gyupro commented 6 months ago

Hi, thanks for your wonderful work.

I am struggling to use my LoRA-tuned model.

I went through the following steps:

  1. Fine-tuning with LoRA

    • base model: Undi95/Meta-Llama-3-8B-Instruct-hf
    • llama3 conversation template
  2. Inference with Gradio

    • ran the server with --model-base Undi95/Meta-Llama-3-8B-Instruct-hf and --model-path checkpoints/LLaVA-Meta-Llama-3-8B-Instruct-lora
  3. The model output never ends. (I think something is wrong with the EOS token? See the stop-token check sketched below.)

[screenshot of the non-terminating model output]
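The behaviour described above is consistent with generation that only stops on the tokenizer's eos_token and never on Llama-3's end-of-turn token <|eot_id|>. The following is a minimal, hedged sketch (not code from this repo) for inspecting the base model's stop-token setup and, if you drive generate() yourself, passing both stop ids explicitly:

```python
# Minimal sketch (not from this repo): check the stop-token setup that run-on
# generations usually point to. Llama-3 Instruct ends assistant turns with
# <|eot_id|>; if generation only stops on the tokenizer's eos_token, it can
# keep going until max_new_tokens is hit.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Undi95/Meta-Llama-3-8B-Instruct-hf")

eot_id = tok.convert_tokens_to_ids("<|eot_id|>")
print("eos_token :", tok.eos_token, "->", tok.eos_token_id)
print("<|eot_id|>:", eot_id)

# If you call generate() directly, both stop ids can be passed explicitly
# (model loading omitted here):
# terminators = [tok.eos_token_id, eot_id]
# output_ids = model.generate(input_ids, max_new_tokens=512, eos_token_id=terminators)
```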

displaywz commented 6 months ago

same question

mmaaz60 commented 6 months ago

Hi Both,

Thanks for your interest in our work. I noticed you are using a LLaMA-3 base model whose earlier versions were reported to have tokenizer issues.

I would recommend using the official meta-llama/Meta-Llama-3-8B as the base model, since they fixed the tokenizer issue that was affecting generation. Let me know if this solves the issue.
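One way to sanity-check this suggestion is to compare the stop-token configuration of the community conversion against the official release. This is a hedged sketch, not an official check; note that the meta-llama repository is gated on the Hub and requires an authenticated login with access granted:

```python
# Hedged sketch: compare the stop-token configuration of the two base models
# mentioned in this thread. A difference here would explain generations that
# never terminate with one base but behave with the other.
from transformers import AutoTokenizer

for repo in ["Undi95/Meta-Llama-3-8B-Instruct-hf", "meta-llama/Meta-Llama-3-8B"]:
    tok = AutoTokenizer.from_pretrained(repo)
    print(repo)
    print("  eos_token :", tok.eos_token, "->", tok.eos_token_id)
    print("  <|eot_id|>:", tok.convert_tokens_to_ids("<|eot_id|>"))
```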

Thanks and Good Luck

displaywz commented 6 months ago

Nice work! I am using the latest llava-llama3 model downloaded from Hugging Face and trying to use it with LoRA. When I use the model directly, without LoRA, it repeatedly outputs the final piece of text for my task until it hits the maximum length, so I suspect the EOS token is involved. When I try to use LoRA, the output becomes strange and even contains strings that are not words. Could this be because I used LLaVA's original finetune task_lora directly, only swapping in the llama3 dialogue template and the HF base model for the llava-llama3 version? Thank you again for your work, it has been very helpful to me :)
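Besides the base model, a mismatch between the conversation template used for fine-tuning and the one used at inference can also produce run-on or garbled output. The sketch below assumes the template key is "llama3", as mentioned in this thread, and uses the conversation API from the upstream LLaVA codebase; verify both against your installed copy:

```python
# Hedged sketch: build a prompt with the "llama3" template mentioned above and
# inspect the separators / stop string it actually produces, so the inference
# prompt can be compared against what fine-tuning used.
from llava.conversation import conv_templates

conv = conv_templates["llama3"].copy()
conv.append_message(conv.roles[0], "<image>\nWhat is shown in this picture?")
conv.append_message(conv.roles[1], None)

prompt = conv.get_prompt()
print(prompt)                                        # special tokens the model is actually prompted with
print("stop string:", conv.sep2 if conv.sep2 else conv.sep)
```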

lzy-ps commented 4 months ago

Same problem. The output from the model is a bunch of exclamation marks.
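A wall of exclamation marks usually means the logits have collapsed to a degenerate low-id token, most often because the LoRA weights were applied on top of a base model that does not match the one used for training, or because the merged weights contain NaN/inf values. This is a hedged way to check; load_pretrained_model and get_model_name_from_path come from the upstream LLaVA codebase this repo builds on, and the paths are placeholders:

```python
# Hedged sketch: load the LoRA checkpoint on top of its base model and scan for
# non-finite weights, which would explain degenerate "!" output.
import torch
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

model_path = "checkpoints/LLaVA-Meta-Llama-3-8B-Instruct-lora"  # adjust to your checkpoint
model_base = "Undi95/Meta-Llama-3-8B-Instruct-hf"               # must match the base the LoRA was trained against

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path, model_base, get_model_name_from_path(model_path)
)

bad = [name for name, p in model.named_parameters() if not torch.isfinite(p).all()]
print("non-finite parameter tensors:", bad[:10] if bad else "none")
print("token id 0 decodes to:", repr(tokenizer.decode([0])))  # if this prints '!', degenerate low-id logits would explain the pattern
```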

ganliqiang commented 1 month ago

I have the same problem when I set model_base=llava_meta_llama, but when I set model_base=llama the result is correct. How can I fine-tune starting from llava_llama3 instead of from scratch from llama3?