open-compass / VLMEvalKit

Open-source evaluation toolkit for large vision-language models (LVLMs), supporting ~100 VLMs and 40+ benchmarks
https://huggingface.co/spaces/opencompass/open_vlm_leaderboard
Apache License 2.0

how to run on multi-gpu with device_map='auto' #482

[Open] qianwangn opened 6 days ago

qianwangn commented 6 days ago

When I use a 34B LLM, a single GPU reports OOM, so I set device_map='auto'. But then it seems I can't use torchrun, and inference takes too much time. How can I solve this?
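For context, sharding a checkpoint across all visible GPUs with device_map='auto' in Hugging Face transformers typically looks like the sketch below. The checkpoint name is a placeholder, not a VLMEvalKit default, and VLMEvalKit's own model wrappers handle loading internally; this only illustrates the loading pattern the question refers to.

```python
# Minimal sketch of multi-GPU sharded loading with device_map='auto'.
# Requires `transformers` and `accelerate`; the checkpoint name below
# is a placeholder, not a real model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "some-org/some-34b-model"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # halve memory vs. float32
    device_map="auto",          # shard layers across all visible GPUs
)
```

Note that with device_map='auto' a single process claims all GPUs for one sharded copy of the model, so the script is launched with plain `python` rather than `torchrun`; launching it under torchrun would spawn multiple ranks that each try to shard across the same GPUs.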

kennymckormick commented 4 days ago

Hi, which VLM are you using?