Open dongwhfdyer opened 2 months ago
Now I know it: look at the LLaVA project and you will find the two-stage weight-loading method. If anyone still doesn't know, contact me.
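For anyone landing here: the two-stage pattern boils down to loading the base language-model checkpoint first, then loading the separately saved extra weights (e.g. the `mm_projector.bin`) on top with a non-strict state-dict load. Below is a minimal torch-only sketch of that idea using a toy module; the class and key names are illustrative stand-ins, not LLaVA's actual code.

```python
import torch
import torch.nn as nn

# Toy stand-ins: a "base" language model plus an extra mm_projector head.
class ToyVLM(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Linear(8, 8)      # stands in for the LLM weights
        self.mm_projector = nn.Linear(4, 8)  # stands in for the vision projector

model = ToyVLM()

# Stage 1: load the base checkpoint. It has no projector keys, so strict=False.
base_ckpt = {k: v for k, v in ToyVLM().state_dict().items()
             if k.startswith("backbone")}
missing, unexpected = model.load_state_dict(base_ckpt, strict=False)
# Only the projector keys should be reported missing at this point.
assert all(k.startswith("mm_projector") for k in missing)

# Stage 2: load the separately saved projector weights (e.g. mm_projector.bin).
proj_ckpt = {k: v for k, v in ToyVLM().state_dict().items()
             if k.startswith("mm_projector")}
model.load_state_dict(proj_ckpt, strict=False)
```

In the real project the two stages correspond to `from_pretrained` on the base model followed by loading the projector file with `torch.load` and a non-strict `load_state_dict`.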
Thanks, @dongwhfdyer, I already figured it out.
Hi there, I am trying out this model and the demo worked, but when I ran the lora.sh script for training it failed with: OSError: Error no file named pytorch_model.bin, tf_model.h5, model.ckpt.index or flax_model.msgpack found in directory /home/LLaVA/llava-v1.5-13b-lora. Can you guide me on how to train this model?
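That OSError typically means the directory you pointed at contains only LoRA adapter files (e.g. `adapter_model.bin` / `adapter_config.json`) rather than full model weights, so `from_pretrained` finds none of the weight files it recognizes. A quick sketch to diagnose which case you are in (the file names below are the standard Hugging Face / PEFT ones; the helper itself is hypothetical):

```python
import os

def classify_checkpoint_dir(path):
    """Report whether a directory looks like full weights or a LoRA adapter."""
    files = set(os.listdir(path))
    full = {"pytorch_model.bin", "model.safetensors", "tf_model.h5",
            "model.ckpt.index", "flax_model.msgpack"}
    # Sharded checkpoints ship an index file instead of one big weight file.
    sharded = any(f.endswith(".index.json") for f in files)
    if files & full or sharded:
        return "full model weights"
    if "adapter_model.bin" in files or "adapter_model.safetensors" in files:
        return "LoRA adapter only (load it on top of the matching base model)"
    return "no recognizable weight files"
```

If it reports an adapter-only directory, the fix is to load the matching base model first and apply the adapter on top, rather than passing the adapter directory as if it were a full checkpoint.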
@dongwhfdyer Hi, in finetune_lora.sh, --pretrain_mm_mlp_adapter is set to path/to/llava-v1.5-mlp2x-336px-pretrain-vicuna-7b-v1.5/mm_projector.bin, and I have an issue: is that mm_projector.bin using weights from llava-v1.5-7b? I couldn't find mm_projector.bin in GeoChat-7B.
I have followed the instructions in finetune_lora.sh and got the trained model. This is my finetune_lora.sh, and here is the saved LoRA fine-tuned model. I don't know how to load this model, and I didn't find it in README.md. Can anyone help me? Thank you!
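On loading a LoRA output directory: the usual route is to load the base model and attach the adapter (e.g. with PEFT's `PeftModel.from_pretrained`), optionally merging the low-rank update back into the base weights for inference. The merge itself is just W + (alpha/r)·B·A. A torch-only sketch of why the merged weight is equivalent to applying the adapter separately (all shapes and values here are toy examples):

```python
import torch

torch.manual_seed(0)
d, r, alpha = 8, 2, 16
W = torch.randn(d, d)          # frozen base weight
A = torch.randn(r, d) * 0.01   # LoRA down-projection
B = torch.randn(d, r) * 0.01   # LoRA up-projection
scaling = alpha / r

# Merged weight: fold the low-rank update into the base matrix.
W_merged = W + scaling * (B @ A)

x = torch.randn(3, d)
out_adapter = x @ W.T + scaling * (x @ A.T @ B.T)  # base + adapter separately
out_merged = x @ W_merged.T                         # single merged matmul
assert torch.allclose(out_adapter, out_merged, atol=1e-5)
```

With PEFT this merge is what `merge_and_unload()` does for you, leaving a plain model you can save and load without the adapter machinery.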