EvolvingLMMs-Lab / lmms-eval

Accelerating the development of large multimodal models (LMMs) with lmms-eval
https://lmms-lab.github.io/

OOM issue #49

Closed — yiyexy closed this issue 2 months ago

yiyexy commented 2 months ago

When I tried to load the llava-qwen72B model, I hit an out-of-memory error on an H800 GPU. It seems this framework loads a complete copy of the model onto each GPU. How can I shard the model across GPUs so it doesn't run out of memory?

kcz358 commented 2 months ago

Hi, you can refer to #12 and #4.
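For readers landing here later, a minimal sketch of the approach those issues point to: launch a single process and let Transformers/Accelerate shard the model across GPUs via `device_map=auto`, instead of replicating the full model on every GPU. The model path, task name, and batch size below are placeholders, not values from this thread.

```shell
# Sketch, assuming lmms-eval's accelerate-based launcher.
# --num_processes=1 runs one evaluation process; device_map=auto then
# splits the model's layers across all visible GPUs (naive pipeline
# parallelism) rather than putting a full 72B copy on each card.
accelerate launch --num_processes=1 -m lmms_eval \
    --model llava \
    --model_args pretrained=<path-or-hub-id>,device_map=auto \
    --tasks <task-name> \
    --batch_size 1
```

Note the trade-off: with one process the GPUs work on the same forward pass sequentially, so throughput drops compared with data-parallel evaluation, but a model too large for one card can fit.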