philschmid / llm-sagemaker-sample

OutOfMemoryError when trying to fine-tune llama3.1 #25

Open korneevm opened 1 month ago

korneevm commented 1 month ago

Hi Phil, thanks for the great repo and examples!

Everything worked well when I played with llama3-70b using your guide, but now I'm stuck fine-tuning llama3.1-70b.

I've followed all the steps from the https://www.philschmid.de/sagemaker-train-deploy-llama3 article, fixed some incompatible package versions, and started the training process. But at the "Loading checkpoint shards" step I'm getting an error:

```
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 3.91 GiB. GPU 2 has a total capacty of 39.39 GiB of which 1.05 GiB is free. Including non-PyTorch memory, this process has 0 bytes memory in use. Of the allocated memory 36.82 GiB is allocated by PyTorch, and 109.46 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

I've tried to overcome this problem but with no success; one thing I attempted is sketched below. Maybe you could point out what I'm missing.
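Concretely, following the allocator hint at the end of the error, I tried passing PYTORCH_CUDA_ALLOC_CONF into the training container through the estimator's environment, roughly like this (a sketch: the entry point, role, and framework versions are placeholders, not my exact setup), but it didn't change the outcome:

```python
from sagemaker.huggingface import HuggingFace

# Sketch: forward the allocator hint from the OOM message into the
# training container. Values marked "placeholder" are not my real setup.
huggingface_estimator = HuggingFace(
    entry_point="train.py",        # placeholder script name
    instance_type="ml.p4d.24xlarge",
    instance_count=1,
    role="<your-sagemaker-role>",  # placeholder
    transformers_version="4.36",   # placeholder framework versions
    pytorch_version="2.1",
    py_version="py310",
    environment={
        # Cap allocator split sizes to reduce fragmentation, per the hint.
        "PYTORCH_CUDA_ALLOC_CONF": "max_split_size_mb:256",
    },
)
```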

philschmid commented 1 month ago

What instance type are you trying to use?

korneevm commented 1 month ago

ml.p4d.24xlarge, like in the tutorial.

korneevm commented 1 month ago

I've tried using ml.p4de.24xlarge for training and it worked well. I had to make some minor adjustments to the code; I can open a PR if you're interested.
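For context, ml.p4d.24xlarge has 8x A100 40GB GPUs while ml.p4de.24xlarge has 8x A100 80GB, so the switch doubles per-GPU memory. In the tutorial's estimator call the instance swap itself is essentially a one-argument change, roughly (a sketch; the other arguments are placeholders standing in for the tutorial's values):

```python
from sagemaker.huggingface import HuggingFace

# Sketch: same estimator call as the tutorial, with only the instance
# type swapped. p4de has 8x A100 80GB vs 8x A100 40GB on p4d.
huggingface_estimator = HuggingFace(
    entry_point="train.py",            # placeholder script name
    instance_type="ml.p4de.24xlarge",  # was "ml.p4d.24xlarge"
    instance_count=1,
    role="<your-sagemaker-role>",      # placeholder
    transformers_version="4.36",       # placeholder framework versions
    pytorch_version="2.1",
    py_version="py310",
)
```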

philschmid commented 1 month ago

Yes, could you share what you needed to change?