Lin-Tianwei opened this issue 3 months ago
For the LLaVA-1.5-7B model, evaluating with the LoRA weights from Hugging Face, my MME score is only 1344.9, and the score from my own training run is 1372.7, both far below the reported 1510.7. Can someone explain this discrepancy? Thanks.
Did you use LoRA fine-tuning on llava_mix_665k?