vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: for mistral-7B, local batch inference mode causes OOM error, while serving mode does not cause error #7767

Open yananchen1989 opened 3 weeks ago

yananchen1989 commented 3 weeks ago

Your current environment

vLLM version: 0.5.4

GPU: 24 GB memory

🐛 Describe the bug

CUDA_VISIBLE_DEVICES=0 vllm serve mistralai/Mistral-7B-Instruct-v0.3 --api-key yyy --port 1704 --gpu_memory_utilization 0.95 --max_model_len 8000 --dtype bfloat16

works fine.

CUDA_VISIBLE_DEVICES=0 vllm serve meta-llama/Meta-Llama-3.1-8B-Instruct --api-key yyy --port 1704 --gpu_memory_utilization 0.95 --max_model_len 8000 --dtype bfloat16

also works fine.

however,

import torch
from vllm import LLM

# prompts (list of str) and sampling_params (SamplingParams) are defined elsewhere in my script
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.3", dtype="bfloat16", max_model_len=8000,
          tensor_parallel_size=torch.cuda.device_count(), gpu_memory_utilization=0.95)
responses = llm.generate(prompts, sampling_params)

always results in an OOM error, no matter whether it runs on a single GPU (24 GB) or on two GPUs.
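
For completeness, here is the same offline call with settings that are commonly suggested to reduce memory pressure. This is only a sketch: enforce_eager and max_num_seqs are existing engine arguments, but the specific values below are untested assumptions, not a confirmed fix.

import torch
from vllm import LLM, SamplingParams

# enforce_eager=True skips CUDA graph capture, which reserves extra GPU memory;
# max_num_seqs caps how many sequences are batched concurrently.
llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.3",
          dtype="bfloat16",
          max_model_len=8000,
          tensor_parallel_size=torch.cuda.device_count(),
          gpu_memory_utilization=0.90,  # assumed value, slightly below the 0.95 used above
          enforce_eager=True,
          max_num_seqs=64)              # assumed cap on concurrent sequences
sampling_params = SamplingParams(max_tokens=256)  # placeholder sampling settings
responses = llm.generate(["Hello, my name is"], sampling_params)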

Mysnake commented 3 weeks ago

I have the same bug. My GPU has plenty of memory (screenshot attached), but I get this error:

torch.OutOfMemoryError: CUDA out of memory. Tried to allocate 8.00 GiB. GPU 0 has a total capacity of 23.55 GiB of which 5.58 GiB is free. Including non-PyTorch memory, this process has 17.96 GiB memory in use. Of the allocated memory 17.00 GiB is allocated by PyTorch, and 380.25 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation.

This is my command:

python -m vllm.entrypoints.openai.api_server --model /home/dj/work/models/minicpm-v-2_6/ --served-model-name "minicpmv" --gpu_memory_utilization 0.5 --trust-remote-code --device cuda --tensor-parallel-size 4

The bug occurs no matter whether I set --tensor-parallel-size 4 or not. My installed package versions are listed in the attached screenshot.
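
For what it's worth, a minimal sketch of the allocator hint from the log, applied before vLLM is imported. It assumes the offline LLM API hits the same allocation pattern as the server; the model path and settings are copied from the command above, and the environment value is the one PyTorch itself suggests.

import os

# Suggestion taken directly from the OOM message: allow expandable segments to reduce fragmentation.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

from vllm import LLM

llm = LLM(model="/home/dj/work/models/minicpm-v-2_6/",
          trust_remote_code=True,
          gpu_memory_utilization=0.5,
          tensor_parallel_size=4)

The same variable can also be exported in the shell before launching vllm.entrypoints.openai.api_server.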

iamseokhyun commented 2 weeks ago

same issue here

mayankjobanputra commented 1 week ago

yep same issue here.