mya2152 closed this issue 1 year ago
This is about the train.py file. CUDA is used in multiple places in the code, and I'm wondering whether there is a simple way to switch over entirely to CPU processing, unless there is a way to get past this error:
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 58.00 MiB (GPU 0; 2.00 GiB total capacity; 1.66 GiB already allocated; 0 bytes free; 1.72 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
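Before giving up on the GPU, it may be worth trying the workaround the error message itself points to: setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` to reduce allocator fragmentation. A minimal sketch (the value 64 is an arbitrary starting point, and the variable must be set before the first CUDA allocation, ideally at the very top of the script):

```python
import os

# Hypothetical fragmentation workaround suggested by the error message.
# Must be set before PyTorch makes its first CUDA allocation, so place
# this before any code that touches the GPU (ideally before importing
# anything that initializes CUDA).
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"
```

With only 2 GiB of total VRAM this is unlikely to be enough on its own, but it costs nothing to try.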
To my knowledge, nobody has attempted CPU training. Typically 16 GB of VRAM is needed for training. If you really want to try it on CPU, I would start by removing the .cuda() calls in train.py that move tensors to the GPU.
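A less invasive alternative to deleting the .cuda() calls is to route every model and tensor through a single device variable, which falls back to CPU automatically when no usable GPU is present. A minimal sketch of the pattern (the model and batch here are hypothetical stand-ins, not the actual objects in train.py):

```python
import torch
import torch.nn as nn

# Pick the GPU when one is available, otherwise fall back to CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Hypothetical stand-ins for the model and a batch from train.py:
model = nn.Linear(16, 4).to(device)    # instead of model.cuda()
batch = torch.randn(8, 16).to(device)  # instead of batch.cuda()

output = model(batch)
```

Replacing each `x.cuda()` with `x.to(device)` keeps the script runnable on both GPU and CPU machines, so you don't need two versions of train.py.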
Would this be possible? I have around 48 GB of system RAM, but my GPU is a notebook M1000M with 2 GB of VRAM, which is insufficient. I understand CPU training would take significantly longer, but I still want to switch since I don't have the graphics hardware.
Windows 10 x64, VS Code, Jupyter notebook (.ipynb)