-
How can I resolve an error message when using the `pip install -e .` command to install vlm-evaluation?
![image](https://github.com/TRI-ML/vlm-evaluation/assets/20516638/8f20a07e-3327-4b4e-a77a-b6f69d5…
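Without the full error text it is hard to diagnose, but one common cause is the flag itself being mistyped (the question writes it as "- e" with a space). A minimal sketch of the correct invocation, assuming the repository has been cloned and you are inside it:

```shell
# Editable-install syntax: the flag is "-e" (no internal space),
# followed by the path to the project, here "." for the current directory:
#   python -m pip install -e .
# You can confirm your pip version recognizes the flag by checking
# the install subcommand's help output:
python -m pip install --help | grep -- "--editable"
```

If the flag is correct and the error persists, the full traceback (not the screenshot alone) is needed to say more.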
-
Traceback (most recent call last):
File "/dataset-vlm/jingyaoli/LLMInfer/InfLLM/benchmark/pred.py", line 327, in
preds = get_pred(
File "/dataset-vlm/jingyaoli/LLMInfer/InfLLM/benchmark/pr…
-
We met an error:
`[2024-09-23 11:13:54,886] [INFO] [launch.py:315:sigkill_handler] Killing subprocess 123969
[2024-09-23 11:13:54,887] [ERROR] [launch.py:321:sigkill_handler] `
with return co…
-
https://molmo.allenai.org/blog
-
Since we now support the multi-turn benchmark MMDU, we would like to implement the `chat_inner` function for existing VLMs in VLMEvalKit to add support for multi-turn chatting.
Currently, we hav…
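As a rough illustration of the kind of interface a multi-turn `chat_inner` might expose — VLMEvalKit's actual API may differ, and `DummyVLM`, `generate`, and the message format below are illustrative assumptions, not the project's real code:

```python
# Hedged sketch: one possible shape for a multi-turn `chat_inner`.
# `DummyVLM` stands in for a real VLM wrapper; the message format
# ({"role": ..., "content": ...}, oldest turn first) is an assumption.

class DummyVLM:
    """Stand-in for a VLM wrapper; `generate` is a placeholder."""

    def generate(self, prompt: str) -> str:
        # A real model would run inference here.
        return f"[reply to: {prompt[-30:]}]"

    def chat_inner(self, messages: list[dict]) -> str:
        """Consume the full multi-turn history and return the next reply."""
        # Flatten the conversation history into a single prompt string.
        prompt = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
        return self.generate(prompt + "\nassistant:")

history = [
    {"role": "user", "content": "Describe the first image."},
    {"role": "assistant", "content": "A cat on a sofa."},
    {"role": "user", "content": "What color is the cat?"},
]
reply = DummyVLM().chat_inner(history)
```

The key design point is that `chat_inner` receives the entire history each turn, so single-turn models can adopt it by flattening the history into one prompt.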
-
I don't know if this occurs on all affected vector instructions or not, but it occurs on the `VLM` instruction:
42 ******************************…
-
Hello 👋
First of all thank you for the great work and evaluation results!
I understand that, in many cases, you predicted outputs for each question based on the choice that minimizes the loss…
-
### Feature Description
The default maximum context window for all text models is hard-coded at 2048 tokens, but many models have a context window much larger than this. I would like to have an ar…
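A minimal sketch of what the requested change could look like — `ModelConfig`, `truncate_to_context`, and the field names are hypothetical, not the project's actual API; the point is only that the 2048 default becomes an overridable parameter:

```python
# Hedged sketch: replace a hard-coded 2048-token limit with a
# configurable argument. Names here are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class ModelConfig:
    # Keep 2048 as the backward-compatible default, but let callers raise it.
    max_context_window: int = 2048

def truncate_to_context(token_ids: list[int], cfg: ModelConfig) -> list[int]:
    """Keep only the most recent tokens that fit in the context window."""
    return token_ids[-cfg.max_context_window:]

tokens = list(range(5000))
default_cfg = ModelConfig()                       # 2048-token window
large_cfg = ModelConfig(max_context_window=4096)  # user-supplied override
```

Keeping 2048 as the default preserves current behavior for existing callers while letting larger-context models opt in.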
-
### System Info
### What I want
So I want a solution that can quickly generate AI output by efficiently reusing the precomputed KV caches of the text and images from all previous prompts!
### By using…
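The idea of reusing previous prompts' caches can be sketched as prefix matching: before running the model, find the longest already-cached prefix of the new prompt and recompute only the suffix. This is an illustrative toy, not any library's actual implementation; a real system would store per-layer attention key/value tensors rather than the string placeholder used here:

```python
# Hedged sketch of prompt-prefix KV-cache reuse. The cache payload is a
# stand-in string; `_compute` counts how many tokens needed fresh work.

class PrefixKVCache:
    def __init__(self):
        self._cache: dict[tuple, str] = {}  # prefix tokens -> cached state
        self.recomputed_tokens = 0

    def _compute(self, tokens: tuple) -> None:
        # Placeholder for running the model over `tokens`.
        self.recomputed_tokens += len(tokens)

    def get_state(self, tokens: tuple) -> str:
        # Find the longest previously cached prefix of this prompt.
        best = 0
        for k in self._cache:
            if len(k) > best and tokens[:len(k)] == k:
                best = len(k)
        # Only the uncached suffix needs fresh computation.
        self._compute(tokens[best:])
        self._cache[tokens] = f"kv({len(tokens)})"
        return self._cache[tokens]

cache = PrefixKVCache()
cache.get_state((1, 2, 3, 4))        # computes 4 tokens
cache.get_state((1, 2, 3, 4, 5, 6))  # reuses 4 cached, computes only 2
```

In this toy run, the second prompt shares a 4-token prefix with the first, so only 2 new tokens are processed (6 total across both calls instead of 10).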
-
timm v1.0.3 was just released 2 hours ago (https://github.com/huggingface/pytorch-image-models/releases/tag/v1.0.3) and it seems like they've reworked the API for `forward_intermediates()` and it retu…