Closed AmazDeng closed 1 month ago
The loading process of a VLM is: load the vision model -> load the LLM weights -> allocate the KV cache.
For llava-v1.5-7b, the first two steps take up about 14.5 GB of CUDA memory. But according to your log, the out-of-memory error occurred in step 2. Are there any other programs taking up GPU memory?
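If it helps, something like the sketch below can confirm how much memory is actually free before the pipeline is built, and cap the KV cache allocation in step 3. This is only an illustration: it assumes the TurboMind backend, and the 0.5 value for `cache_max_entry_count` is an arbitrary example, not a recommended setting.

```python
# Sketch: check free GPU memory, then build the pipeline with a smaller KV cache.
import torch
from lmdeploy import pipeline, TurbomindEngineConfig

free, total = torch.cuda.mem_get_info()  # bytes free / total on the current device
print(f"free: {free / 1e9:.1f} GB, total: {total / 1e9:.1f} GB")

# cache_max_entry_count controls the fraction of GPU memory given to the KV cache;
# lowering it reduces the memory allocated in the third loading step.
backend_config = TurbomindEngineConfig(cache_max_entry_count=0.5)
pipe = pipeline('liuhaotian/llava-v1.5-7b', backend_config=backend_config)
```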
No, there is only one lmdeploy program running on the GPU.
Can you try running the code without Jupyter or IPython?
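Concretely, that just means putting the pipeline code in a plain Python file (the name run_llava.py below is arbitrary) and launching it from a terminal instead of a notebook cell, e.g. `python run_llava.py`:

```python
# run_llava.py -- hypothetical file name; run with: python run_llava.py
from lmdeploy import pipeline

if __name__ == "__main__":
    pipe = pipeline('liuhaotian/llava-v1.5-7b')
    print(pipe('hello'))
```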
I only have one card and it's currently running a program, so I can't test it right now. I'll test it next week and will share the results then.
Did you solve this problem? I encountered exactly the same problem.
I didn't solve this problem; I switched to a different architecture instead.
Which architecture did you switch to?
This issue is marked as stale because it has been marked as invalid or awaiting response for 7 days without any further response. It will be closed in 5 days if the stale label is not removed or if there is no further response.
This issue is closed because it has been stale for 5 days. Please open a new issue if you have similar issues or you have any new updates now.
Checklist
Describe the bug
I have one A100 GPU card. Following the instructions (https://github.com/InternLM/lmdeploy/blob/main/docs/zh_cn/inference/vl_pipeline.md), I ran the HelloWorld llava program and got an error. The llava model is llava-v1.5-7b, which is not very big, so why does a "cuda out of memory" error occur?
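For context, the HelloWorld example in that doc is roughly the following (model path and test image URL are taken from the doc and may have changed since):

```python
from lmdeploy import pipeline
from lmdeploy.vl import load_image

pipe = pipeline('liuhaotian/llava-v1.5-7b')
image = load_image('https://raw.githubusercontent.com/open-mmlab/mmdeploy/main/tests/data/tiger.jpeg')
response = pipe(('describe this image', image))
print(response)
```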
error info:
Reproduction
Environment
Error traceback
No response