haotian-liu / LLaVA

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
https://llava.hliu.cc
Apache License 2.0

[Question] Cannot reproduce lora-mme #1362

Open · Lin-Tianwei opened this issue 3 months ago

Lin-Tianwei commented 3 months ago

Question

For the LLaVA-1.5-7B model, using the LoRA weights from Hugging Face, my MME score is only 1344.9, and the score from my own training run is 1372.7. Both are far below the reported score of 1510.7. Can someone explain this? Thanks.
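
For context, below is a minimal sketch of how the LoRA checkpoint is typically loaded for evaluation, assuming the repo's standard model builder and the public Hugging Face weights (the exact paths are assumptions, not confirmed from the issue). A common cause of degraded scores is evaluating the LoRA checkpoint without supplying the base model it was trained on.

```python
# Sketch: loading a LLaVA-1.5-7B LoRA checkpoint for evaluation.
# The key detail is passing model_base so the LoRA deltas are applied
# on top of the base LLM; paths below are assumptions for illustration.
from llava.model.builder import load_pretrained_model
from llava.mm_utils import get_model_name_from_path

lora_path = "liuhaotian/llava-v1.5-7b-lora"  # LoRA weights on Hugging Face (assumed)
base_path = "lmsys/vicuna-7b-v1.5"           # base LLM the LoRA was trained on (assumed)

tokenizer, model, image_processor, context_len = load_pretrained_model(
    model_path=lora_path,
    model_base=base_path,  # omitting this leaves the base weights without the LoRA updates
    model_name=get_model_name_from_path(lora_path),
)
```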

OliverLeeXZ commented 2 months ago

Are you using LoRA fine-tuning on llava_mix_665k?