Open RogerRafa opened 1 month ago
In the LLaVA 1.5 pretrain.sh script, evaluation_strategy is set to "no", which means the model is never evaluated on a validation set during pretraining. Without automatic evaluation, you can still check whether training is going well:

- Periodically save checkpoints during training and evaluate them manually with a separate validation script (see the sketch below).
- Monitor key metrics such as loss, accuracy, or any task-specific metric on a held-out validation dataset.
- Modify the training script to save checkpoints at fixed intervals and load each one separately for validation.
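If it helps, here is a minimal sketch of that "separate validation script" idea. It assumes the checkpoint directories can be loaded with transformers.AutoModelForCausalLM and that you build a DataLoader over a held-out slice of the pretraining data with the same preprocessing as training; the checkpoint path and the checkpoint_loss helper are illustrative, not from the LLaVA repo, and the loading step would need to be replaced with LLaVA's own model-building code if only the projector weights are saved. The point is simply to track whether validation loss keeps decreasing across checkpoints.

```python
"""Minimal sketch of an offline validation pass over saved checkpoints.

Assumptions (not from the original thread): each checkpoint directory can be
loaded with transformers.AutoModelForCausalLM, and the eval DataLoader yields
dicts with input_ids, attention_mask, and labels produced by the same
preprocessing used for training. For LLaVA pretraining, where the default
setup may save only the mm_projector weights, swap in the project's own
model builder instead.
"""
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM


@torch.no_grad()
def checkpoint_loss(ckpt_dir: str, eval_loader: DataLoader, device: str = "cuda") -> float:
    """Return the average language-modeling loss of one checkpoint on held-out data."""
    model = AutoModelForCausalLM.from_pretrained(ckpt_dir).to(device).eval()
    total, batches = 0.0, 0
    for batch in eval_loader:
        batch = {k: v.to(device) for k, v in batch.items()}
        outputs = model(**batch)  # labels are in the batch, so loss is returned
        total += outputs.loss.item()
        batches += 1
    return total / max(batches, 1)


# Example usage (eval_loader must be built from a held-out slice of the
# pretraining data, e.g. a few thousand samples excluded from training):
#   import glob
#   for ckpt in sorted(glob.glob("./checkpoints/llava-pretrain/checkpoint-*")):
#       print(ckpt, checkpoint_loss(ckpt, eval_loader))
```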
Thank you! I'll give it a try.
Question
I want to pretrain the model, but I see that evaluation_strategy in pretrain.sh is set to "no". How can I tell whether the model is being trained well?