BAAI-DCAI / Bunny

A family of lightweight multimodal models.
Apache License 2.0

What is the performance gap between LoRA and full fine-tuning? #30

Closed. FaltingsA closed this issue 4 months ago.

Isaachhh commented 5 months ago

Thank you for your interest.

Our early experiments found that, under a certain setting, LoRA gives about a 2-point improvement over full fine-tuning on the MMBench dev split (64.69 vs. 62.46).

Should you wish to proceed, feel free to fully fine-tune Bunny yourself and measure the performance gap; full fine-tuning is supported by our code.
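
For readers unfamiliar with the distinction being compared here, below is a minimal sketch (not Bunny's actual training code) of how LoRA differs from full fine-tuning, using the Hugging Face transformers and peft libraries. The model name, rank, and other hyperparameters are illustrative assumptions, not Bunny's settings.

```python
# Minimal sketch: LoRA vs. full fine-tuning of an LLM backbone.
# Hyperparameters below are assumed values for illustration only.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/phi-2",  # the LLM backbone discussed in this thread
    torch_dtype=torch.float16,
)

USE_LORA = True  # flip to False for full fine-tuning

if USE_LORA:
    # LoRA: freeze the base weights and train small low-rank adapter
    # matrices injected into selected linear layers.
    lora_config = LoraConfig(
        r=16,             # adapter rank (assumed value)
        lora_alpha=32,    # adapter scaling factor (assumed value)
        target_modules=["q_proj", "k_proj", "v_proj", "dense"],
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    # Only a small fraction of parameters receive gradients.
    model.print_trainable_parameters()
else:
    # Full fine-tuning: every parameter receives gradient updates,
    # which costs far more memory but updates the whole model.
    for param in model.parameters():
        param.requires_grad = True
```

Because LoRA trains only a small number of adapter parameters, it acts as a form of regularization, which is one plausible reason it can match or slightly exceed full fine-tuning on a benchmark like MMBench, as reported above.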

Isaachhh commented 5 months ago

For SigLIP + Phi-2: [screenshot of experimental results comparing LoRA and full fine-tuning, 2024-03-30]

FaltingsA commented 5 months ago

Thanks for your comprehensive experimental results! We are all very interested in the reasons behind Bunny's effectiveness, and it would be great to see more ablations on data and architecture in the technical report. Thanks a lot!

Isaachhh commented 5 months ago

We are working on that. (:

Isaachhh commented 4 months ago

Closing the issue for now since there is no further discussion. Feel free to reopen it if you have any other questions.