EvolvingLMMs-Lab / lmms-eval

Accelerating the development of large multimodal models (LMMs) with lmms-eval
https://lmms-lab.github.io/

Refcoco Metric #17

Open Jiahaohong opened 3 months ago

Jiahaohong commented 3 months ago

The metric currently used to evaluate the RefCOCO dataset is CIDEr. Why not evaluate it as a Referring Expression Comprehension (REC) task instead?
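For context, the standard REC metric is grounding accuracy: a prediction counts as correct when the IoU between the predicted and ground-truth bounding boxes meets a threshold (commonly 0.5). A minimal sketch of that metric is below; the function names are illustrative and not part of the lmms-eval API:

```python
def iou(box_a, box_b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0


def rec_accuracy(pred_boxes, gt_boxes, thresh=0.5):
    """Fraction of predictions whose IoU with the ground truth is >= thresh."""
    hits = sum(1 for p, g in zip(pred_boxes, gt_boxes) if iou(p, g) >= thresh)
    return hits / len(gt_boxes)
```

For example, a perfect prediction yields `rec_accuracy([(0, 0, 2, 2)], [(0, 0, 2, 2)])` of 1.0, while partially overlapping boxes are scored against the 0.5 IoU cutoff.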