OpenBMB / MiniCPM-V

MiniCPM-Llama3-V 2.5: A GPT-4V Level Multimodal LLM on Your Phone
Apache License 2.0

[BUG] out of memory #299

Closed: limllzu closed this issue 4 days ago

limllzu commented 1 week ago

Is there an existing issue / discussion for this?

Is there an existing answer for this in FAQ?

Current Behavior

How much GPU memory does full-parameter fine-tuning require? I ran it on 7 × 40 GB GPUs and still got out of memory. I also reduced model_max_length to 512, which did not help. Which other parameters should I modify?

Error message (raised from self.optimizer.step()):

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.54 GiB. GPU 6 has a total capacty of 39.39 GiB of which 590.06 MiB is free. Including non-PyTorch memory, this process has 38.81 GiB memory in use. Of the allocated memory 34.27 GiB is allocated by PyTorch, and 2.76 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.54 GiB. GPU 0 has a total capacty of 39.39 GiB of which 596.06 MiB is free. Including non-PyTorch memory, this process has 38.81 GiB memory in use. Of the allocated memory 34.26 GiB is allocated by PyTorch, and 2.76 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.54 GiB. GPU 2 has a total capacty of 39.39 GiB of which 542.06 MiB is free. Including non-PyTorch memory, this process has 38.86 GiB memory in use. Of the allocated memory 34.32 GiB is allocated by PyTorch, and 2.76 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.54 GiB. GPU 3 has a total capacty of 39.39 GiB of which 510.06 MiB is free. Including non-PyTorch memory, this process has 38.89 GiB memory in use. Of the allocated memory 34.35 GiB is allocated by PyTorch, and 2.76 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.54 GiB. GPU 4 has a total capacty of 39.39 GiB of which 430.06 MiB is free. Including non-PyTorch memory, this process has 38.97 GiB memory in use. Of the allocated memory 34.43 GiB is allocated by PyTorch, and 2.76 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.54 GiB. GPU 5 has a total capacty of 39.39 GiB of which 542.06 MiB is free. Including non-PyTorch memory, this process has 38.86 GiB memory in use. Of the allocated memory 34.32 GiB is allocated by PyTorch, and 2.76 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.54 GiB. GPU 1 has a total capacty of 39.39 GiB of which 478.06 MiB is free. Including non-PyTorch memory, this process has 38.92 GiB memory in use. Of the allocated memory 34.38 GiB is allocated by PyTorch, and 2.76 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
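As the error text itself suggests, allocator fragmentation can be reduced by setting PYTORCH_CUDA_ALLOC_CONF. A minimal sketch of doing this from Python, assuming it runs at the very top of the training script before any CUDA allocation (the 128 MiB split size is an illustrative value, not one tested on this setup; the variable can equally be exported in the shell before launching):

```python
import os

# PYTORCH_CUDA_ALLOC_CONF is read by PyTorch's caching allocator, so it must
# be set before the first CUDA allocation; the top of the training script is
# the safe place. "max_split_size_mb:128" is an illustrative starting value.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch  # imported after the env var is set so the allocator picks it up
```

Note that this only mitigates fragmentation (the ~2.76 GiB "reserved but unallocated" in the logs); it cannot recover more than that.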

GPU memory utilization: (screenshot attached: 微信图片_20240626160724)
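For context on why 7 × 40 GB still fails at optimizer.step(): full fine-tuning with Adam needs roughly 16 bytes per parameter (fp32 weights, gradients, and the two Adam moments), which is well over 120 GB for an 8B-parameter model before activations, so unless optimizer states are sharded (ZeRO-2/3) or offloaded, each GPU cannot hold its share. A minimal sketch of the usual memory-reduction knobs, assuming the fine-tuning script accepts standard transformers TrainingArguments (all values are examples, and the DeepSpeed config filename is a hypothetical placeholder):

```python
from transformers import TrainingArguments

# Illustrative memory-saving settings for full-parameter fine-tuning;
# every value is an example, and the DeepSpeed config path is hypothetical.
training_args = TrainingArguments(
    output_dir="output",
    per_device_train_batch_size=1,     # smallest per-GPU micro-batch
    gradient_accumulation_steps=8,     # preserve the effective batch size
    gradient_checkpointing=True,       # recompute activations to save memory
    bf16=True,                         # half-precision activations/gradients
    deepspeed="ds_config_zero3.json",  # hypothetical path: ZeRO-3 shards params,
                                       # gradients, and optimizer states across GPUs
)
```

With ZeRO-3 the ~128 GB of states is divided across the 7 GPUs (~18 GB each), which is why sharding or CPU offload, rather than a shorter model_max_length, is typically the decisive fix here.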

Expected Behavior

No response

Steps To Reproduce

No response

Environment

- Python: 3.10
- Transformers: 4.40.0
- PyTorch: 2.1.2
- CUDA: 11.8

Anything else?

No response