Closed amink69 closed 2 weeks ago
If you're getting CUDA out of memory,
it's because you simply don't have enough VRAM to train. If even setting the batch size to 1 still causes the error, there's not much you can do short of upgrading your GPU. 4 GB seems to be the threshold where this error becomes very common.
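Before upgrading, two things are worth trying: the `max_split_size_mb` setting that the error message itself suggests, and gradient accumulation, which keeps peak memory near the batch-size-1 footprint while still training with a larger effective batch. A minimal sketch (the model, sizes, and accumulation count are illustrative, not from the original training setup; it runs on CPU here, but the same pattern applies on GPU):

```python
import os

# The error message suggests max_split_size_mb when reserved memory is much
# larger than allocated memory. It must be set before torch touches CUDA.
# The value 128 is an illustrative starting point, not a recommendation.
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch
import torch.nn as nn

# Gradient accumulation: simulate an effective batch of 8 using micro-batches
# of 1, so only one sample's activations are resident at a time.
model = nn.Linear(16, 2)          # tiny placeholder model
opt = torch.optim.SGD(model.parameters(), lr=0.01)
accum_steps = 8

data = torch.randn(8, 16)         # dummy inputs
targets = torch.randint(0, 2, (8,))

opt.zero_grad()
for i in range(accum_steps):
    x, y = data[i:i + 1], targets[i:i + 1]
    # Scale the loss so the accumulated gradient matches a full-batch step.
    loss = nn.functional.cross_entropy(model(x), y) / accum_steps
    loss.backward()               # gradients sum across micro-batches
opt.step()                        # one optimizer step per effective batch
```

If the model itself (weights plus optimizer state) already fills the 4 GB card before any batch is loaded, neither trick will help, and mixed precision (`torch.autocast`) or a smaller model are the remaining options.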
Hi, I have an NVIDIA 3050 Ti GPU with an AMD Ryzen 7 5800H, but every time I try to train a model I get this error: torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 20.00 MiB (GPU 0; 4.00 GiB total capacity; 3.36 GiB already allocated; 0 bytes free; 3.44 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I use the latest Python on Windows 11, and everything else works fine, but I can't train.
I use the lowest settings, but I see this error every single time. I've googled it and searched GitHub but couldn't find any solution. I set the batch size to 1, gave the CPU the highest priority, and so on, but I still get this error every time. Please help, thanks.