haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
https://llava.hliu.cc
Apache License 2.0
19.54k stars · 2.15k forks

[Question] How to perform cross-validation during the fine-tuning process? #1536

Open J0eky opened 3 months ago

J0eky commented 3 months ago

Question

I have been fine-tuning the llava-v1.5-13b model on my own dataset using the finetune_task_lora.sh script. Training itself goes well, but I noticed that no validation is performed at any point during training, which raises overfitting concerns. I would be grateful if someone could offer some advice.
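One simple workaround, since the LoRA script trains on a single `--data_path` file, is to carve out a held-out split from the annotation file before training and keep it aside for evaluation. This is only a sketch under the assumption that the annotations are the usual LLaVA-style JSON list of conversation records; the file names here (`data.json`, `train.json`, `val.json`) are placeholders, not anything the repo defines:

```python
import json
import random

def split_dataset(samples, val_fraction=0.1, seed=42):
    """Shuffle and split a list of samples into train/val subsets.

    A fixed seed keeps the split reproducible across runs, so the
    same held-out examples are never seen during training.
    """
    rng = random.Random(seed)
    indices = list(range(len(samples)))
    rng.shuffle(indices)
    n_val = max(1, int(len(samples) * val_fraction))
    val_idx = set(indices[:n_val])
    train = [s for i, s in enumerate(samples) if i not in val_idx]
    val = [s for i, s in enumerate(samples) if i in val_idx]
    return train, val

# Example with a hypothetical LLaVA-style annotation list:
samples = [{"id": str(i), "conversations": []} for i in range(100)]
train, val = split_dataset(samples)
print(len(train), len(val))  # 90 10

# In practice you would load data.json, write train.json / val.json,
# and pass train.json to finetune_task_lora.sh via --data_path:
# json.dump(train, open("train.json", "w"))
# json.dump(val, open("val.json", "w"))
```

You can then evaluate the saved checkpoints on `val.json` to get a rough overfitting signal, even without touching the training loop.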

narayanasastry-rvds commented 2 weeks ago

I am fine-tuning llava-v1.6-vicuna-7b and would like to know how to compute the validation loss during LoRA fine-tuning.
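Since LLaVA's training script is built on the Hugging Face `Trainer`, one route (untested against this repo, so treat it as a hypothesis) is to construct a second `LazySupervisedDataset` from a held-out JSON file, pass it as `eval_dataset`, and enable periodic evaluation in `TrainingArguments`. If you instead run evaluation by hand, the per-batch mean losses have to be re-weighted by token count before averaging, because batches rarely contain the same number of supervised tokens. A minimal sketch of that aggregation step, with made-up batch numbers purely for illustration:

```python
def weighted_eval_loss(batch_losses, batch_token_counts):
    """Combine per-batch mean losses into a single validation loss.

    Each batch's mean loss is weighted by the number of non-masked
    target tokens it contains, so large and small batches contribute
    proportionally rather than equally.
    """
    total_tokens = sum(batch_token_counts)
    total_loss = sum(loss * n for loss, n in zip(batch_losses, batch_token_counts))
    return total_loss / total_tokens

# Hypothetical eval pass: two batches, mean losses 1.0 and 3.0,
# with 3 and 1 supervised tokens respectively.
print(weighted_eval_loss([1.0, 3.0], [3, 1]))  # 1.5
```

A naive unweighted average of the same two batches would give 2.0, overstating the contribution of the single-token batch.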