-
### Describe the issue
Congratulations on your excellent work!
I am trying to reproduce the results of your work. The script convert_gqa_for_eval.py appears to be missing when running eval_gqa.sh. Could you …
-
Can anyone give some details on how to convert these models like Bunny-v1.0-4B, Bunny-v1.1-4B, to gguf for llama.cpp?
-
Hi author,
I saw that there are 3 columns for the LlavaBench scores (Relative Score, VLM Score, GPT4 Score), and it seems that in the evaluation code the Relative Score is calculated based on the VLM Score an…
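For context, the common LLaVA-Bench convention (an assumption here, not confirmed from this repo's evaluation code) reports the relative score as the candidate model's score expressed as a percentage of the GPT-4 reference score. A minimal sketch:

```python
def relative_score(vlm_score: float, gpt4_score: float) -> float:
    """Relative score as a percentage of the GPT-4 reference score.

    This follows the usual LLaVA-Bench convention; the exact formula
    used by this repository may differ (hypothetical sketch).
    """
    if gpt4_score == 0:
        raise ValueError("reference score must be non-zero")
    return 100.0 * vlm_score / gpt4_score

# Example: a VLM scoring 6.0 against a GPT-4 reference of 8.0
print(relative_score(6.0, 8.0))  # 75.0
```

Under this convention, a relative score above 100 would mean the evaluated model outscored the GPT-4 reference on the judged responses.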
-
This issue contains the test results for the upstream sync, develop PR, and release testing branches. Comment 'proceed with rebase' to approve. Close when maintenance is complete or there will be prob…
-
First of all, great work!
Wondering whether you plan to implement "Dynamic High Resolution" or "AnyRes" (https://llava-vl.github.io/blog/2024-01-30-llava-next/), which would support higher-resolut…
-
For every model I've downloaded, the speed saturates my bandwidth (~13MB/sec) until it hits 98/99%. Then the download slows to a few tens of KB/s and takes hour(s) to finish.
I've tried multipl…
-
### Reminder
- [X] I have read the README and searched the existing issues.
### Reproduction
```shell
torchrun --nnodes 1 --nproc_per_node 1 src/train.py \
    --stage sft \
    --do_train \
    --model_…
```
-
Hello,
As the title suggests, I'm wondering whether lmms-eval has plans to enable evaluation of quantized LMMs, e.g. those quantized with [AWQ](https://github.com/mit-han-lab/llm-awq).
**Why this is neces…