Insight-V: Exploring Long-Chain Visual Reasoning with Multimodal Large Language Models

Great work! #3

Open linzhiqiu opened 1 week ago

linzhiqiu commented 1 week ago

Hey,

I am Zhiqiu Lin, a final-year PhD student at Carnegie Mellon University working with Prof. Deva Ramanan. Your work on Insight-V is very interesting and impressive!

I wanted to share NaturalBench (NeurIPS'24 D&B) in case you are looking for better VQA benchmarks:

NaturalBench (https://linzhiqiu.github.io/papers/naturalbench/) is a vision-centric benchmark that challenges vision-language models with pairs of simple questions about natural imagery. Unlike prior VQA benchmarks (like MME and ScienceQA), which blind language models (e.g., GPT-3.5) can solve without seeing the images, NaturalBench is constructed so that such shortcuts fail. We evaluated 53 state-of-the-art models, and even top models like GPT-4o and Qwen2-VL fall 50%-70% short of human accuracy (90%+), revealing significant room for improvement.

We also found that current models show strong answer biases, such as favoring “Yes” over “No” regardless of the input. Correcting these biases can boost performance by 2-3x, even for GPT-4o, making NaturalBench a valuable testbed for future debiasing techniques.
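In case it is useful, here is a minimal toy sketch of what generic post-hoc debiasing of yes/no answers can look like. It is only an illustration under assumed inputs (the `(p_yes, p_no)` prediction format and the rescaling rule are my simplifications, not the exact procedure from the paper):

```python
from typing import List, Tuple

def debias_yes_no(predictions: List[Tuple[float, float]],
                  target_yes_rate: float = 0.5) -> List[str]:
    """Toy post-hoc debiasing for binary VQA answers.

    `predictions` holds (p_yes, p_no) pairs -- a hypothetical format;
    real model outputs will differ. If the model says "Yes" more often
    than a balanced benchmark implies, rescale p_yes before deciding.
    """
    n = len(predictions)
    observed_yes_rate = sum(p_yes > p_no for p_yes, p_no in predictions) / n
    # Shrink (or grow) the "yes" score toward the target rate.
    scale = target_yes_rate / max(observed_yes_rate, 1e-9)
    return ["Yes" if p_yes * scale > p_no else "No"
            for p_yes, p_no in predictions]

# Example: a model heavily biased toward "Yes" on a balanced question set.
preds = [(0.9, 0.1), (0.8, 0.2), (0.7, 0.3), (0.6, 0.4)]
print(debias_yes_no(preds))  # the weakest "Yes" flips to "No" after rescaling
```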

Check out my Twitter post about it here: https://x.com/ZhiqiuLin/status/1848454555341885808.

🚀 Start using NaturalBench: https://github.com/Baiqi-Li/NaturalBench
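A minimal sketch of loading the benchmark with the Hugging Face `datasets` library; the dataset id and field access below are illustrative assumptions, so please check the repo README for the real ones:

```python
from datasets import load_dataset  # pip install datasets

# Dataset id is an assumption for illustration; see the NaturalBench
# README for the actual Hub id and available splits.
dataset = load_dataset("BaiqiL/NaturalBench", split="train")

sample = dataset[0]
# Field names vary by benchmark release; inspect the keys rather than
# hard-coding them.
print(sample.keys())
```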

Best, Zhiqiu