Open jxgu1016 opened 2 months ago
I think I may take a look into this later. I'll try to set up a quick image evaluation for Qwen2-VL this weekend, though the results may not align exactly for this version. I may also add video evaluation later and check the results.
Anyone willing to help is also welcome to raise a PR.
Merged with PR https://github.com/EvolvingLMMs-Lab/lmms-eval/pull/268 👍
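In case it helps anyone trying out the merged support, here is a rough sketch of plain single-image Qwen2-VL inference with the transformers API. The image path and prompt are placeholders, and this is not necessarily how lmms-eval invokes the model internally, just the underlying call it would wrap:

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Load in bf16 and let accelerate place the weights (assumes a transformers
# release recent enough to include Qwen2-VL support).
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("Qwen/Qwen2-VL-7B-Instruct")

# A single image + text turn; "example.jpg" and the prompt are placeholders.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]
prompt = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image = Image.open("example.jpg")

inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)

# Strip the prompt tokens before decoding the generated answer.
answer = processor.batch_decode(
    output_ids[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```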
Thanks for your work! But I hit an OOM error during Qwen2-VL-7B inference with batch size 1 on a 40 GB GPU. Is there a known cause? By the way, CogVLM and LLaVA run fine under the same conditions.
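Not a confirmed diagnosis, but one common cause is Qwen2-VL's dynamic resolution: large images can produce far more visual tokens than fixed-resolution models like LLaVA, which inflates activation memory even at batch size 1. A possible workaround is to cap the visual token budget via the processor; the exact numbers below are illustrative assumptions, not maintainer recommendations:

```python
import torch
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

# Each visual token corresponds to roughly a 28x28 pixel region, so
# max_pixels = 1024 * 28 * 28 caps each image at ~1024 visual tokens.
# These values are illustrative; tune them for your GPU.
min_pixels = 256 * 28 * 28
max_pixels = 1024 * 28 * 28
processor = AutoProcessor.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    min_pixels=min_pixels,
    max_pixels=max_pixels,
)

# bf16 plus FlashAttention-2 (if installed) further reduces memory use.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "Qwen/Qwen2-VL-7B-Instruct",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
    device_map="auto",
)
```

It may also be worth checking whether the lmms-eval wrapper exposes similar options through its model args.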
Any plan to support the latest Qwen2-VL model evaluation?