OpenBMB / MiniCPM

MiniCPM3-4B: An edge-side LLM that surpasses GPT-3.5-Turbo.
Apache License 2.0
7.13k stars 454 forks

How much GPU memory does running your vLLM demo need? #193

Closed lifelsl closed 2 months ago

lifelsl commented 2 months ago

Hi, when trying vLLM inference I ran your inference_vllm.py without changing the code, but it reports out of memory even though I have 50 GB of GPU memory. I don't understand why, or how much memory it actually needs. I'm using vllm (0.4.2).

```
INFO 08-25 08:17:00 llm_engine.py:100] Initializing an LLM engine (v0.4.2) with config: model='/data2/liushuliang/MiniCPM/OpenBMB/MiniCPM-2B-sft-bf16', speculative_config=None, tokenizer='/data2/liushuliang/MiniCPM/OpenBMB/MiniCPM-2B-sft-bf16', skip_tokenizer_init=False, tokenizer_mode=auto, revision=None, tokenizer_revision=None, trust_remote_code=True, dtype=torch.bfloat16, max_seq_len=4096, download_dir=None, load_format=LoadFormat.AUTO, tensor_parallel_size=1, disable_custom_all_reduce=False, quantization=None, enforce_eager=False, kv_cache_dtype=auto, quantization_param_path=None, device_config=cuda, decoding_config=DecodingConfig(guided_decoding_backend='outlines'), seed=0, served_model_name=/data2/liushuliang/MiniCPM/OpenBMB/MiniCPM-2B-sft-bf16)
INFO 08-25 08:17:00 utils.py:660] Found nccl from library /home/liushuliang/.config/vllm/nccl/cu11/libnccl.so.2.18.1
INFO 08-25 08:17:02 selector.py:27] Using FlashAttention-2 backend.
INFO 08-25 08:17:11 model_runner.py:175] Loading model weights took 5.1039 GB
INFO 08-25 08:17:12 gpu_executor.py:114] # GPU blocks: 11749, # CPU blocks: 728
rank0: Traceback (most recent call last):
rank0:   File "/data2/liushuliang/MiniCPM/inference/inference_vllm.py", line 46, in <module>
rank0:     llm = LLM(model=args.model_path, tensor_parallel_size=1, dtype='bfloat16', trust_remote_code=True)
rank0:   File "/data1/liushuliang/anaconda3/envs/MiniCPM/lib/python3.10/site-packages/vllm/entrypoints/llm.py", line 123, in __init__
rank0:     self.llm_engine = LLMEngine.from_engine_args(
rank0:   File "/data1/liushuliang/anaconda3/envs/MiniCPM/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 292, in from_engine_args
rank0:     engine = cls(
rank0:   File "/data1/liushuliang/anaconda3/envs/MiniCPM/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 172, in __init__
rank0:   File "/data1/liushuliang/anaconda3/envs/MiniCPM/lib/python3.10/site-packages/vllm/engine/llm_engine.py", line 262, in _initialize_kv_caches
rank0:     self.model_executor.initialize_cache(num_gpu_blocks, num_cpu_blocks)
rank0:   File "/data1/liushuliang/anaconda3/envs/MiniCPM/lib/python3.10/site-packages/vllm/executor/gpu_executor.py", line 117, in initialize_cache
rank0:     self.driver_worker.initialize_cache(num_gpu_blocks, num_cpu_blocks)
rank0:   File "/data1/liushuliang/anaconda3/envs/MiniCPM/lib/python3.10/site-packages/vllm/worker/worker.py", line 179, in initialize_cache
rank0:   File "/data1/liushuliang/anaconda3/envs/MiniCPM/lib/python3.10/site-packages/vllm/worker/worker.py", line 184, in _init_cache_engine
rank0:     self.cache_engine = CacheEngine(self.cache_config, self.model_config,
rank0:   File "/data1/liushuliang/anaconda3/envs/MiniCPM/lib/python3.10/site-packages/vllm/worker/cache_engine.py", line 49, in __init__
rank0:     self.gpu_cache = self._allocate_kv_cache(self.num_gpu_blocks, "cuda")
rank0:   File "/data1/liushuliang/anaconda3/envs/MiniCPM/lib/python3.10/site-packages/vllm/worker/cache_engine.py", line 64, in _allocate_kv_cache
rank0: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 1.62 GiB. GPU
```
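For context: vLLM pre-allocates a fixed fraction of the card (`gpu_memory_utilization`, default 0.9) and fills everything left after the weights with KV-cache blocks, so the OOM happens at cache allocation, not at weight loading. A minimal arithmetic sketch of that budget, using the 50 GB card and 5.10 GB of weights from the log above (the 6 GiB co-tenant process is a hypothetical assumption for illustration):

```python
# Rough sketch of vLLM's KV-cache budget arithmetic.
GIB = 1024 ** 3

total_gpu_mem = 50 * GIB      # card size reported in the issue
gpu_mem_util  = 0.9           # vLLM's default gpu_memory_utilization
weights       = 5.10 * GIB    # "Loading model weights took 5.1039 GB"

# vLLM reserves roughly total * utilization, then gives whatever is
# left after the weights to KV-cache blocks.
budget   = total_gpu_mem * gpu_mem_util
kv_cache = budget - weights
print(f"KV cache target: {kv_cache / GIB:.1f} GiB")   # ~39.9 GiB

# If another process already holds part of the card, the final block
# allocation (1.62 GiB in the traceback) fails even though the budget
# looked fine on paper.
other_process = 6 * GIB       # hypothetical co-tenant on the same GPU
free_for_vllm = total_gpu_mem - other_process
print("OOM expected:", weights + kv_cache > free_for_vllm)
```

This is why an OOM with "only" a 5 GB model on a 50 GB card is plausible: the cache allocation, not the model, consumes the budget.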

lifelsl commented 2 months ago

Solved: I adjusted gpu_memory_utilization. Setting it a bit lower fixed it.
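A sketch of that fix: pick a `gpu_memory_utilization` small enough that vLLM's budget fits into the memory actually free on the card. The `safe_utilization` helper, the headroom margin, and the 44/50 GiB figures are illustrative assumptions, not part of the vLLM API; on a real machine you would query the numbers with `torch.cuda.mem_get_info()`.

```python
# Derive a conservative gpu_memory_utilization from free/total memory.
def safe_utilization(free_bytes: int, total_bytes: int,
                     headroom: float = 0.05) -> float:
    """Fraction of total memory vLLM may claim, minus a safety margin."""
    return max(0.0, free_bytes / total_bytes - headroom)

GIB = 1024 ** 3
util = safe_utilization(free_bytes=44 * GIB, total_bytes=50 * GIB)
print(round(util, 2))  # 0.83

# Plugged into the demo's constructor call (shown as a comment so this
# sketch stays runnable without a GPU):
#   from vllm import LLM
#   llm = LLM(model=args.model_path, tensor_parallel_size=1,
#             dtype="bfloat16", trust_remote_code=True,
#             gpu_memory_utilization=util)
```

Lowering the fraction shrinks the KV cache (fewer concurrent sequences / shorter batches) rather than the model itself, which is why it resolves the OOM without changing anything else.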