mistralai / mistral-finetune


CUDA out of memory during training #69

Open CodeWithOz opened 3 weeks ago

CodeWithOz commented 3 weeks ago

I keep getting "CUDA out of memory" errors while fine-tuning Mistral 7B. My hardware is a single NVIDIA A10G GPU with 24 GB of memory. The error message looks like this:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 896.00 MiB. GPU 0 has a total capacity of 21.99 GiB of which 759.38 MiB is free. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 20.52 GiB is allocated by PyTorch, and 102.08 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
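In case it's relevant, my understanding of the expandable_segments suggestion in that traceback is that the variable just has to be in the environment before the first CUDA allocation, e.g. something along these lines (or the equivalent shell export before launching training):

```python
import os

# Must be set before the first CUDA allocation; doing it before importing torch
# is the safest place. The value is the one the error message itself suggests.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "expandable_segments:True"

import torch

x = torch.zeros(1, device="cuda")  # allocator is now configured with expandable segments
```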

The reserved-but-unallocated figure varies between runs, from about 82 MB to 1.08 GB, so the error seems to happen regardless of whether a lot of reserved memory is sitting unused.
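For what it's worth, a way to see how allocated vs. reserved memory evolves is to dump the allocator's own statistics right before the failing step; a rough sketch (the helper name is mine, just for illustration):

```python
import torch

def log_cuda_memory(tag: str) -> None:
    # "allocated" is memory backing live tensors; "reserved" is what the caching
    # allocator is holding from the driver. A large gap between the two points at
    # fragmentation rather than true exhaustion.
    allocated = torch.cuda.memory_allocated() / 2**30
    reserved = torch.cuda.memory_reserved() / 2**30
    print(f"[{tag}] allocated={allocated:.2f} GiB, reserved={reserved:.2f} GiB")
    # Detailed per-pool breakdown from the allocator itself:
    print(torch.cuda.memory_summary(abbreviated=True))
```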

I've tried a number of measures with no success, mainly scaling down the training parameters.

The README says that best results require an A100 or H100, but that single-GPU machines can work with Mistral 7B. Given how many parameters I've already minimized, is bigger hardware really the only way forward?
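For context on why 24 GB is tight in the first place, here is my rough arithmetic (assuming the base weights are held in bf16; all figures approximate):

```python
# Back-of-envelope memory budget for fine-tuning Mistral 7B on a 24 GiB card.
params = 7.25e9            # ~7.25B parameters in the base model
bytes_per_param = 2        # bf16
weights_gib = params * bytes_per_param / 2**30
print(f"frozen base weights: ~{weights_gib:.1f} GiB")   # ~13.5 GiB

gpu_gib = 24
print(f"left for activations, gradients, optimizer state, CUDA context: "
      f"~{gpu_gib - weights_gib:.1f} GiB")
```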

CodeWithOz commented 3 weeks ago

UPDATE: I tried a single A100 GPU with 40 GB of memory and hit the same error. It looks like there's a memory leak somewhere, because the process simply consumed all the available memory. Here's the updated error message:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 4.00 GiB. GPU 0 has a total capacity of 39.39 GiB of which 1.33 GiB is free. Process 35938 has 38.05 GiB memory in use. Of the allocated memory 35.03 GiB is allocated by PyTorch, and 2.21 GiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting PYTORCH_CUDA_ALLOC_CONF=expandable_segments:True to avoid fragmentation. See documentation for Memory Management (https://pytorch.org/docs/stable/notes/cuda.html#environment-variables)
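To tell a genuine leak (peak memory climbing step after step) apart from a single allocation that is simply too big, one option is to log and reset the peak allocation each step, roughly like this (assuming the training loop can be instrumented):

```python
import torch

def log_peak_and_reset(step: int) -> None:
    # If this number keeps climbing across steps, something is holding on to
    # tensors (a leak); if it's roughly flat but too large, the model/batch
    # simply doesn't fit in the available memory.
    peak_gib = torch.cuda.max_memory_allocated() / 2**30
    print(f"step {step}: peak allocated ~{peak_gib:.2f} GiB")
    torch.cuda.reset_peak_memory_stats()
```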

CodeWithOz commented 3 weeks ago

I'm renting my GPUs from brev.dev by the way.

zazabap commented 1 week ago

Has this issue been resolved? I'm encountering exactly the same problem.

matheus-prandini commented 1 week ago

@CodeWithOz @zazabap Was the GPU running only the training? Can you provide the command you used to run the training? Additionally, what libraries and versions are you using?
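For example, something like the following would capture the versions most relevant here (and `python -m torch.utils.collect_env` gives a fuller report, including driver details):

```python
import torch

# Versions and device details most relevant to CUDA OOM reports.
print("torch:", torch.__version__)
print("cuda (built against):", torch.version.cuda)
print("device:", torch.cuda.get_device_name(0))
print("total memory (GiB):", torch.cuda.get_device_properties(0).total_memory / 2**30)
```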