Open · yananchen1989 opened this issue 3 weeks ago
I have the same bug. My GPU has plenty of memory, but I get this log:

torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 8.00 GiB. GPU 0 has a total capacity of 23.55 GiB of which 5.58 GiB is free. Including non-PyTorch memory, this process has 17.96 GiB memory in use. Of the allocated memory 17.00 GiB is allocated by PyTorch, and 380.25 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.

This is my command:

python -m vllm.entrypoints.openai.api_server --model /home/dj/work/models/minicpm-v-2_6/ --served-model-name "minicpmv" --gpu_memory_utilization 0.5 --trust-remote-code --device cuda --tensor-parallel-size 4

No matter whether I set --tensor-parallel-size 4 or not, the bug occurs. This is my package list:
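For reference, the log's own suggestion can be applied by setting the allocator option before launching; a minimal sketch, assuming the same launch command as above (this only helps when fragmentation is the problem, not when memory is genuinely exhausted):

```
# Sketch: enable expandable segments, as the error message itself
# suggests, then relaunch with the same flags as above.
export PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True
python -m vllm.entrypoints.openai.api_server \
    --model /home/dj/work/models/minicpm-v-2_6/ \
    --served-model-name "minicpmv" \
    --gpu_memory_utilization 0.5 \
    --trust-remote-code \
    --device cuda \
    --tensor-parallel-size 4
```

Note that the log reports 17.00 GiB already allocated by PyTorch, while --gpu_memory_utilization 0.5 on a 23.55 GiB card implies a budget of only about 11.8 GiB, so raising that value may also be necessary.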
Same issue here.

Yep, same issue here.
Your current environment
vLLM version: 0.5.4
GPU: 24 GB of memory
🐛 Describe the bug
works fine.

also works fine.

However,

will always cause an OOM error, no matter whether on a single GPU (24 GB) or on two GPUs.
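A common mitigation for this class of OOM, sketched here with illustrative values (the 0.90 and 4096 below are assumptions, not taken from this report): give vLLM a larger share of the card and cap the maximum sequence length so the preallocated KV cache shrinks.

```
# Illustrative sketch: raise vLLM's memory budget and cap the context
# length; both values are assumptions and should be tuned per setup.
python -m vllm.entrypoints.openai.api_server \
    --model /home/dj/work/models/minicpm-v-2_6/ \
    --served-model-name "minicpmv" \
    --trust-remote-code \
    --gpu-memory-utilization 0.90 \
    --max-model-len 4096
```

vLLM preallocates gpu_memory_utilization × total GPU memory at startup for the weights plus the KV cache, so an OOM can occur during startup even when the card itself has free memory, which matches the "GPU has a lot of memory but still OOM" symptom reported above.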