Open · uniquehou opened this issue 8 months ago
Same issue here. How did you resolve it?
@uniquehou I'm hitting the same problem. How did you resolve it?
Not yet. We're checking for overfitting.
Hi, I'm also dealing with fine-tuning overfit. Does your 250k dataset follow the standard fine-tuning data format? And did you change the loss to a classification loss, or keep the language-modeling loss? Thanks a lot!
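For reference, a minimal record in the LLaVA-style fine-tuning format looks roughly like the sketch below (field names follow the conventions used in the LLaVA repo's training scripts; the concrete `id`/`image` values are placeholders, so double-check against your checkout):

```python
import json

# One LLaVA-style fine-tuning record: an image path plus a conversation,
# where the "gpt" turn carries the target answer (here, a class label).
sample = {
    "id": "000001",                      # placeholder id
    "image": "images/000001.jpg",        # placeholder image path
    "conversations": [
        {"from": "human", "value": "<image>\nWhat is the category of this image?"},
        {"from": "gpt", "value": "cat"},
    ],
}

# The training list file is a JSON array of such records.
print(json.dumps([sample], indent=2))
```

For a classification task, keeping the language-modeling loss and letting the model emit the label as text (as above) is the common setup; swapping in an explicit classification head would be a departure from the stock scripts.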
Discussion
Hi, I tried LLaVA-1.5 (13B) on an image classification task with good results, but now I'm running into a problem. With the first 250k samples (LoRA fine-tuning), precision went up as we added more training data. Later we added another 200k samples, and the metric dropped by 6-7% instead. We're now fairly sure the new data follows the same distribution as before, so what else could explain the decline?
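Since the question hinges on whether the new 200k really matches the old distribution, one quick sanity check is to compare per-class label frequencies between the two splits. A minimal sketch, assuming LLaVA-style records where the label is the `gpt` conversation turn (the record structure and function names here are assumptions for illustration, not taken from the thread):

```python
from collections import Counter

def label_distribution(samples):
    """Count class labels in a list of LLaVA-style records, where the
    answer (label) is each record's 'gpt' conversation turn."""
    labels = Counter()
    for sample in samples:
        for turn in sample["conversations"]:
            if turn["from"] == "gpt":
                labels[turn["value"].strip()] += 1
    return labels

def report_shift(old, new):
    """Print the per-class frequency shift between two splits."""
    total_old, total_new = sum(old.values()), sum(new.values())
    for label in sorted(set(old) | set(new)):
        p_old = old.get(label, 0) / total_old
        p_new = new.get(label, 0) / total_new
        print(f"{label}: {p_old:.3f} -> {p_new:.3f} ({p_new - p_old:+.3f})")

if __name__ == "__main__":
    # Toy demo; in practice load the 250k and 200k JSON files instead.
    old = label_distribution([
        {"conversations": [{"from": "human", "value": "<image>\nclassify"},
                           {"from": "gpt", "value": "cat"}]},
        {"conversations": [{"from": "human", "value": "<image>\nclassify"},
                           {"from": "gpt", "value": "dog"}]},
    ])
    new = label_distribution([
        {"conversations": [{"from": "human", "value": "<image>\nclassify"},
                           {"from": "gpt", "value": "cat"}]},
    ])
    report_shift(old, new)
```

If the per-class shifts come back near zero, the drop is more likely something other than distribution mismatch, e.g. near-duplicate images between the new split and the eval set, or simply more epochs-worth of updates pushing the LoRA weights into overfitting.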