-
Great work! I notice that LLaVA-NeXT-Qwen2 (an image model) achieves a surprising Video-MME score of 49.5. In contrast, LLaVA-NeXT-Video (Llama3) only achieves a 30+ Video-MME score (according to…
-
In the file `/root/lmms-eval/lmms_eval/models/__init__.py`, nothing is printed when an ImportError occurs. This can confuse users, who only later encounter a ValueError stating 'Attempted to load mo…
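One low-risk fix would be to log the underlying ImportError before surfacing the later error. A minimal sketch of that idea; the registry dict and loader shape here are illustrative, not the file's exact contents:

```python
import importlib
import logging

logger = logging.getLogger("lmms-eval")

# Illustrative registry; the real AVAILABLE_MODELS mapping in
# lmms_eval/models/__init__.py may be shaped differently.
AVAILABLE_MODELS = {
    "llava": "lmms_eval.models.llava:Llava",
}

def get_model(model_name: str):
    if model_name not in AVAILABLE_MODELS:
        raise ValueError(f"Attempted to load model '{model_name}', but it is not registered.")
    module_path, class_name = AVAILABLE_MODELS[model_name].split(":")
    try:
        module = importlib.import_module(module_path)
        return getattr(module, class_name)
    except ImportError as err:
        # Surface the real cause (often a missing optional dependency)
        # instead of silently swallowing it.
        logger.error(f"Failed to import {module_path}: {err}")
        raise
```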
-
### Description
GLMMs often have problems converging; JASP currently throws an error about a problem with the optimizer routine, but it does not allow adjusting it.
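For reference, this is what adjusting the optimizer routine looks like in code. The sketch below uses a linear mixed model in Python's statsmodels as a stand-in for the GLMM case (JASP fits these models through its own backend), with simulated data:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated grouped data (illustrative only).
rng = np.random.default_rng(0)
n_groups, n_per = 30, 20
group = np.repeat(np.arange(n_groups), n_per)
x = rng.normal(size=n_groups * n_per)
u = rng.normal(scale=0.8, size=n_groups)[group]  # random intercepts
y = 1.0 + 0.5 * x + u + rng.normal(size=n_groups * n_per)
df = pd.DataFrame({"y": y, "x": x, "group": group})

model = smf.mixedlm("y ~ x", df, groups=df["group"])

# When the default optimizer fails to converge, switching routines and
# raising the iteration limit is often enough to get a usable fit.
result = model.fit(method="lbfgs", maxiter=2000)
print(result.summary())
```

Exposing exactly these two knobs (routine and iteration limit) in the JASP UI would cover most convergence failures.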
### Purpose
_No respo…
-
Hi All,
As I'm sure you're aware, linear mixed models and generalised linear mixed models are becoming increasingly common in psychology and other sciences. I'm sure it would be useful to …
-
At the current stage there is a somewhat simple way to include new tasks using the --include_external flag, yet there is no way to include external models other than cloning the lmms-eval repository and…
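A sketch of what such a mechanism could look like: a flag that imports a user-supplied Python file, whose decorator populates a model registry as a side effect. The flag name, decorator, and registry below are hypothetical, not lmms-eval's actual API:

```python
# Hypothetical handler for an --include_external_model flag: load a
# user-supplied module so the lmms-eval repository itself never needs
# to be modified.
import importlib.util
import sys

MODEL_REGISTRY: dict[str, type] = {}

def register_model(name: str):
    """Decorator an external module would use to expose its model class."""
    def decorator(cls):
        MODEL_REGISTRY[name] = cls
        return cls
    return decorator

def include_external_model(path: str) -> None:
    """Execute a Python file from disk; any @register_model decorators
    inside it populate MODEL_REGISTRY as they run."""
    spec = importlib.util.spec_from_file_location("external_model", path)
    module = importlib.util.module_from_spec(spec)
    sys.modules["external_model"] = module
    spec.loader.exec_module(module)
```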
-
Hi,
When testing the llava model on the videomme benchmark, I hit an error when loading videos: the `process_images` function cannot read mp4 files.
https://github.com/EvolvingLMMs-Lab/…
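A common workaround is to decode the mp4 into frames first and pass those to `process_images`. A minimal sketch using OpenCV; the sampling strategy and the `process_images` call in the usage comment are assumptions about the surrounding code, not the repo's exact API:

```python
import cv2
from PIL import Image

def sample_frames(video_path: str, num_frames: int = 8) -> list[Image.Image]:
    """Uniformly sample frames from an mp4 and return them as PIL images."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = [int(i * total / num_frames) for i in range(num_frames)]
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, idx)
        ok, frame = cap.read()
        if ok:
            # OpenCV decodes to BGR; convert to RGB for PIL.
            frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
    cap.release()
    return frames

# frames = sample_frames("video.mp4")
# image_tensors = process_images(frames, image_processor, model_config)
```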
-
I wonder what the "32K" signifies when using the "lmms-lab/LLaVA-NeXT-Video-7B-32K" checkpoint.
-
https://github.com/EvolvingLMMs-Lab/lmms-eval/blob/24dc435908d921e9f1a5706e3141b12e5d838d18/lmms_eval/models/instructblip.py#L9
It seems that the mmmu task does not have this `utils_group_img` file.
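If the file really is absent, a guarded import that names the missing module would at least fail with an actionable message. A sketch; the imported symbol is a placeholder, since the actual name on the linked line may differ:

```python
try:
    # Placeholder symbol; the real import on the linked line may differ.
    from lmms_eval.tasks.mmmu.utils_group_img import doc_to_visual
except ImportError as err:
    raise ImportError(
        "instructblip.py expects lmms_eval/tasks/mmmu/utils_group_img.py, "
        "which is not present in this checkout; check that the task files "
        "match the model code."
    ) from err
```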
-
I used the lmms-eval repo to evaluate my model and got the following result:
So I used the evaluation script in llava and got the following result:
Why are they so different?
-
![lmms_preview](https://user-images.githubusercontent.com/3619927/28041705-c3864098-65ca-11e7-9d8c-5b8108941c18.png)
---
Hi all,
My previous work for Zynaddsubfx has sam…