TRI-ML / vlm-evaluation
VLM Evaluation: Benchmark for VLMs, spanning text generation tasks from VQA to Captioning
89 stars · 10 forks
Issues
#16 (open) [Question] mismatch between bbox and image in RefCOCO · WeitaiKang, 1 month ago · 1 comment
#15 (open) Inconsistent POPE expected number of examples · iancovert, 1 month ago · 0 comments
#14 (closed) Script to compute z scores. · kushal-tri, 2 months ago · 0 comments
#13 (open) Transformer version conflict with prismatic-vlm · tangwh20, 3 months ago · 0 comments
#12 (open) Support InstructBLIP AI2D Eval · ashwin-balakrishna96, 6 months ago · 0 comments
#11 (open) conflict error: pip install - e . · WenjunHuang94, 6 months ago · 4 comments
#10 (open) The issue of abnormal indicators. · tayton42, 7 months ago · 2 comments
#9 (closed) About the number of POPE dataset · Hannibal046, 7 months ago · 1 comment
#8 (closed) Question about the Dataset Type · Hannibal046, 6 months ago · 2 comments
#7 (closed) Evaluation for more datasets · Lauch1ng, 7 months ago · 1 comment
#6 (closed) merge hf-demo <- main · matthiasbuchner, 7 months ago · 0 comments
#5 (closed) Add in AI2D Eval · ashwin-balakrishna96, 8 months ago · 0 comments
#4 (open) Evaluation hangs with accelerate over multiple gpus. · tyleryzhu, 8 months ago · 3 comments
#3 (closed) Slow model inference when evaluation · zeyuanyin, 8 months ago · 1 comment
#2 (closed) Error when evaluating on POPE-full · djghosh13, 6 months ago · 4 comments
#1 (closed) Infer llava model_dir if model_id is given. · lukaemon, 9 months ago · 0 comments