You may want to read some documentation about the CUDA caching allocator: https://zdevito.github.io/2022/08/04/cuda-caching-allocator.html

TL;DR: never use `gpu_memory_utilization=1.0`. There are lots of factors that can take unexpected memory; you cannot control every MB of GPU memory you have.
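For context, here is a small self-contained illustration (not from this issue) of the behaviour the linked post describes: memory that the caching allocator has reserved still looks "used" to the driver even after the tensors are freed, until `empty_cache()` is called.

```python
import torch

MiB = 1 << 20

def report(tag: str) -> None:
    free, total = torch.cuda.mem_get_info()          # driver-level view
    print(f"{tag}: allocated={torch.cuda.memory_allocated() / MiB:.1f} MiB, "
          f"reserved={torch.cuda.memory_reserved() / MiB:.1f} MiB, "
          f"driver_used={(total - free) / MiB:.1f} MiB")

report("start")
x = torch.empty(1024, 1024, 256, device="cuda")      # ~1 GiB of float32
report("after alloc")
del x
report("after del")          # allocated drops; reserved and driver_used do not
torch.cuda.empty_cache()
report("after empty_cache")  # the cached block is returned to the driver
```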
Thank you for your advice, I will look into the relevant content you have provided.
Your current environment
🐛 Describe the bug
I am running a test Python script, test_llm.py. Its code is as follows:
```python
import torch
from vllm import LLM, SamplingParams
import random
import argparse
import time

random.seed(0)  # Set the random seed for reproducibility

_MB = 1 << 20

dummy_prompt = "hello " * 2000
prompts = [dummy_prompt for _ in range(512)]


def test_llm(model: str, n, max_tokens, tp_size):
    prompts_choose = prompts[:n]
    # print(prompts_choose)

    # Create a sampling params object.
    sampling_params = SamplingParams(temperature=0.0,
                                     top_p=1.0,
                                     max_tokens=max_tokens,
                                     ignore_eos=True)

    # Create an LLM.
    llm = LLM(model=model,
              trust_remote_code=True,
              enforce_eager=True,
              disable_log_stats=False,
              max_num_seqs=n,
              tensor_parallel_size=tp_size,
              disable_custom_all_reduce=True,
              gpu_memory_utilization=1.0)

    # Generate texts from the prompts. The output is a list of RequestOutput objects
    # that contain the prompt, generated text, and other information.
    torch.cuda.synchronize()
    time1 = time.perf_counter()
    outputs = llm.generate(prompts_choose, sampling_params)
    torch.cuda.synchronize()
    time2 = time.perf_counter()

    free_gpu_memory, total_gpu_memory = torch.cuda.mem_get_info()
    print(
        f"use_gpu_memory: {(total_gpu_memory - free_gpu_memory)/_MB:.4f} MB, "
        f"free_gpu_memory: {free_gpu_memory/_MB:.4f} MB, "
        f"total_gpu_memory: {total_gpu_memory/_MB:.4f} MB"
    )

    print(f"\nllm.generate over. All Generate Time: {time2 - time1:.5f} s\n")

    # # Print the outputs.
    # for output in outputs:
    #     prompt = output.prompt
    #     generated_text = output.outputs[0].text
    #     # print(f"Prompt: {prompt!r},\n")
    #     print(f"Generated text: {generated_text!r}\n")


def test():
    parser = argparse.ArgumentParser(description='Test LLM')
    parser.add_argument('-n', type=int, default=256, help='Number of prompts')
    parser.add_argument('-max_tokens', type=int, default=128, help='Maximum number of tokens')
    parser.add_argument('-tp_size', type=int, default=1, help='Tensor Parallel Size')
    parser.add_argument('-model', type=str, help='Model path')

    args = parser.parse_args()
    n = args.n
    max_tokens = args.max_tokens
    tp_size = args.tp_size
    model = args.model

    test_llm(model, n, max_tokens, tp_size)


test()
```
The run command is as follows:
When I use the model qwen-7b-chat with `gpu_memory_utilization=1.0`, it crashes inexplicably with the error: `torch.cuda.OutOfMemoryError: CUDA out of memory`.
The error output is:
Then I looked into the code of the profile_run function and added some logging in the model forward pass, and I found a couple of things that might be questionable:
1. Overestimation of num_blocks in determine_num_available_blocks func:
In most cases there is a gap between init_gpu_memory and total_gpu_memory, so using total_gpu_memory to calculate num_gpu_blocks will likely overestimate the space available for the KV cache, which can cause an OOM error when `gpu_memory_utilization=1.0`.
So, I tried to modify the above code to:
After that, I reran the test and the output is:
OOM still occurs, but the number of GPU blocks is reduced from 7877 to 7823 compared to before the modification, and the run progresses from 0% to 23% before crashing.
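To make the gap concrete, here is a back-of-the-envelope sketch of point 1. The formula is a paraphrase rather than the exact determine_num_available_blocks code, and total_gpu_memory, other_usage and peak_memory are assumed values chosen only so that the result reproduces the 7877 / 7823 block counts reported above.

```python
# Sketch of the block-count arithmetic in point 1 (paraphrased, not vLLM code).
MiB = 1 << 20

# Per KV-cache block: 2 (K/V) * block_size 16 * 32 heads * head_dim 128
# * 2 bytes (bfloat16) * 32 layers = 8 MiB.
cache_block_bytes = 2 * 16 * 32 * 128 * 2 * 32

total_gpu_memory = 81_251 * MiB   # assumed 80 GB card
other_usage      = 432 * MiB      # assumed memory already in use at startup
init_gpu_memory  = total_gpu_memory - other_usage
peak_memory      = 18_235 * MiB   # assumed peak usage seen by profile_run

gpu_memory_utilization = 1.0
blocks_from_total = int((total_gpu_memory * gpu_memory_utilization - peak_memory) // cache_block_bytes)
blocks_from_init  = int((init_gpu_memory  * gpu_memory_utilization - peak_memory) // cache_block_bytes)

print(blocks_from_total, blocks_from_init)   # 7877 7823
# The 54-block difference is exactly other_usage // 8 MiB: budgeting from
# total_gpu_memory hands memory that is already in use to the KV cache.
```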
2. Strange gpu memory usage increase in _allocate_kv_cache
I added GPU memory usage prints before and after the gpu_cache and cpu_cache allocations. The GPU usage printout is:
The GPU cache shape is (2, 7823, 16, 32, 128) per layer (num_layers = 32, element size = 16 bits / 8 = 2 bytes for bfloat16), so its GPU memory should be 32 × 2 × 7823 × 16 × 32 × 128 × 2 bytes = 62584 MB. But the printout above shows GPU memory usage of 15271.3750 MB before and 77863.3750 MB after the gpu_cache allocation, a difference of 62592 MB > 62584 MB. There is also a GPU memory change before and after the cpu_cache allocation (77863.3750 MB to 77871.3750 MB), even though that cache lives on the CPU, so GPU memory should theoretically be unchanged.
These strange GPU memory increases further reduce the space available for the model's forward activations, which can result in OOM.
Also, I separately micro-tested the GPU memory usage of _allocate_kv_cache with the following test_alloc_mem.py script:
```python
import torch
from typing import List, Tuple
import torch.nn.functional as F
import gc


def print_memory_usage(info: str,
                       sync: bool = True,
                       empty_cache: bool = False,
                       collect: bool = False):
    get_info = True
    sync = True
    empty_cache = True
    collect = False
    _MB = 1 << 20
    if sync:
        torch.cuda.synchronize()
    if empty_cache:
        torch.cuda.empty_cache()
    if collect:
        gc.collect()
    free_gpu_memory, total_gpu_memory = torch.cuda.mem_get_info()
    print(
        f"{info}: "
        f"use_gpu_memory: {(total_gpu_memory - free_gpu_memory)/_MB:.4f} MB, "
        f"free_gpu_memory: {free_gpu_memory/_MB:.4f} MB, "
        f"total_gpu_memory: {total_gpu_memory/_MB:.4f} MB"
    )
    return (total_gpu_memory - free_gpu_memory) / _MB


def test_alloc(device: str = "cuda"):
    kv_cache: List[torch.Tensor] = []
    # key/value, num_blocks, block_size, num_heads, head_dim
    kv_cache_shape = (2, 7823, 16, 32, 128)
    # kv_cache_shape = (2, 7819, 16, 32, 128)
    dtype = torch.bfloat16
    pin_memory = True if device == "cpu" else False
    # pin_memory = False
    for _ in range(32):
        # null block in CpuGpuBlockAllocator requires at least that
        # block to be zeroed-out.
        # We zero-out everything for simplicity.
        kv_cache.append(
            torch.zeros(kv_cache_shape,
                        dtype=dtype,
                        pin_memory=pin_memory,
                        device=device))
    return kv_cache


def test():
    print_memory_usage("before allocate cuda a")
    a = test_alloc("cuda")
    print_memory_usage("after allocate cuda a")
    print_memory_usage("before allocate cpu b")
    b = test_alloc("cpu")
    print_memory_usage("after allocate cpu b")


test()
```
The result is the same as described above: there is also a strange increase in GPU memory.
My thoughts: for gpu_cache, the occupied size is larger than the theoretical value; my guess is that some alignment strategy in torch's memory management causes this. For cpu_cache causing GPU memory to grow, I cannot explain it, but when I force pin_memory = False, GPU memory no longer increases. Also, when I reduce num_blocks from 7823 to 7819, GPU memory usage stays the same as with 7823, which further suggests some kind of memory-alignment strategy in torch. These overheads compress the space available for activations, which makes OOM easy to hit when gpu_memory_utilization is large.
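As a quick numerical check of the alignment guess: if we assume each per-layer allocation is rounded up to a 2 MiB granularity (an assumption, not verified against the PyTorch/CUDA source), the numbers above fall out exactly, including the fact that 7819 and 7823 blocks occupy the same amount of GPU memory.

```python
# Worked check of the alignment guess, assuming 2 MiB allocation granularity.
import math

MiB = 1 << 20

def kv_cache_gpu_mib(num_blocks: int, granularity_mib: int = 2):
    # One (2, num_blocks, 16, 32, 128) bfloat16 tensor per layer, 32 layers.
    per_layer_bytes = 2 * num_blocks * 16 * 32 * 128 * 2
    per_layer_mib = per_layer_bytes / MiB
    rounded_mib = math.ceil(per_layer_mib / granularity_mib) * granularity_mib
    return per_layer_mib * 32, rounded_mib * 32   # (theoretical, rounded) MiB

print(kv_cache_gpu_mib(7823))   # (62584.0, 62592) -> matches the observed 62592 MB
print(kv_cache_gpu_mib(7819))   # (62552.0, 62592) -> same footprint as 7823 blocks
```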
We can reduce gpu_memory_utilization to avoid OOM, but that makes it difficult to maximize GPU memory usage. Therefore, we may need to modify the determine_num_available_blocks or profile_run functions to take these factors into account and avoid OOM, so that we can safely set gpu_memory_utilization=1.0 and fully utilize GPU resources.
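For illustration only, one possible shape of such an adjustment (a sketch under the assumptions above, not actual vLLM code) would be to budget from the memory that was actually free at startup and to round the per-layer KV-cache allocation up to the assumed 2 MiB granularity before counting blocks.

```python
# Sketch of a more conservative block count (hypothetical helper, not vLLM code).
import math

MiB = 1 << 20
GRANULARITY = 2 * MiB                            # assumed allocation rounding
NUM_LAYERS = 32                                  # Qwen-7B-style config from this issue
BLOCK_BYTES_PER_LAYER = 2 * 16 * 32 * 128 * 2    # K/V * block_size * heads * head_dim * bf16


def conservative_num_gpu_blocks(init_free_bytes: int,
                                peak_activation_bytes: int,
                                gpu_memory_utilization: float) -> int:
    """Count KV-cache blocks against memory that was actually free at startup."""
    budget = init_free_bytes * gpu_memory_utilization - peak_activation_bytes
    blocks = int(budget // (BLOCK_BYTES_PER_LAYER * NUM_LAYERS))
    # Shrink until the rounded-up (real) footprint fits inside the budget.
    while blocks > 0:
        per_layer = blocks * BLOCK_BYTES_PER_LAYER
        rounded = math.ceil(per_layer / GRANULARITY) * GRANULARITY
        if rounded * NUM_LAYERS <= budget:
            break
        blocks -= 1
    return blocks
```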