EvolvingLMMs-Lab / lmms-eval

Accelerating the development of large multimodal models (LMMs) with lmms-eval
https://lmms-lab.github.io/

add MM-UPD #95

Closed AtsuMiyai closed 3 weeks ago

AtsuMiyai commented 4 weeks ago

Thanks for your comprehensive codebase, which is very beneficial to the community!

Could you please add our MM-UPD benchmark to your codebase?

I have tested the performance with LLaVA-1.5-13B and LLaVA-NeXT-34B and confirmed that the results align with those reported in the paper.

Thank you.

AtsuMiyai commented 3 weeks ago

@kcz358 Thanks for your feedback! Yes, we can merge them and remove the redundant _default_template. I have updated the code, so could you please check again?
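For readers following along, the pattern under discussion is that each task config extends one shared default template rather than carrying its own copy, which is why the redundant _default_template can be dropped. A minimal Python sketch of that idea, with hypothetical file names (the actual lmms-eval task configs are YAML files, and this is not the actual loader):

```python
import yaml  # PyYAML

# Minimal sketch, not the actual lmms-eval loader: a task config
# extends one shared default template, so task-specific keys simply
# override the defaults. File names here are hypothetical.
def load_task_config(task_yaml: str, default_yaml: str = "_default_template.yaml") -> dict:
    with open(default_yaml) as f:
        config = yaml.safe_load(f) or {}
    with open(task_yaml) as f:
        config.update(yaml.safe_load(f) or {})
    return config
```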

AtsuMiyai commented 3 weeks ago

Super cool! I'll test LLaVA with the current code and share screenshots for all the settings! (I'll probably share them tomorrow.)

AtsuMiyai commented 3 weeks ago

@kcz358 I slightly changed query_prompt in util.py to exactly match the official UPD code.
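For context, here is a minimal sketch of the kind of multiple-choice prompt builder being adjusted. The function name, layout, and instruction string below are placeholders, since the authoritative wording is whatever the official UPD code specifies:

```python
# Hypothetical sketch of a query-prompt builder for an MMBench-style
# multiple-choice question. The exact layout and instruction string
# must match the official UPD code; treat these as placeholders.
def build_query_prompt(question: str, options: dict[str, str], hint: str | None = None) -> str:
    parts = []
    if hint:
        parts.append(f"Hint: {hint}")
    parts.append(f"Question: {question}")
    for letter in sorted(options):
        parts.append(f"{letter}. {options[letter]}")
    parts.append("Answer with the option's letter from the given choices directly.")
    return "\n".join(parts)
```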

This is a screenshot with LLaVA-1.5-13B.

[Screenshot: LLaVA-1.5-13B results, 2024-06-02 23:29:34]

Could you please merge our code?

kcz358 commented 3 weeks ago

Hi @Luodian, I have reviewed this PR and it currently LGTM. Do you want to do a final check and review before merging?

AtsuMiyai commented 3 weeks ago

I apologize for the many minor updates. Since the leaderboard submission I am working on concurrently requires the detailed results, I have added code to output them in JSON format. 🙇
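As a rough illustration of that kind of addition (the path and record fields below are hypothetical, not the actual leaderboard schema):

```python
import json

# Hypothetical sketch: dump per-sample result details to JSON for a
# leaderboard submission. Field names are illustrative only; the real
# schema is defined by the MM-UPD leaderboard.
def save_submission(records: list[dict], path: str = "mmupd_submission.json") -> None:
    with open(path, "w", encoding="utf-8") as f:
        json.dump(records, f, ensure_ascii=False, indent=2)

save_submission([{"index": 0, "prediction": "A", "hit": 1}])
```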

Luodian commented 3 weeks ago

Looks good, I think we can merge now. Thanks for contributing and reviewing!