effusiveperiscope / so-vits-svc

Training via CPU instead of GPU #23

Closed. mya2152 closed this issue 1 year ago.

mya2152 commented 1 year ago

Would this be possible? I have around 48 GB of system RAM, but my GPU is a notebook M1000M with only 2 GB of VRAM, which is insufficient. I understand CPU training would take significantly longer, but I'd still like to switch since I don't have the graphics hardware.

Environment: Windows 10 x64, VS Code, Jupyter notebook (.ipynb)

mya2152 commented 1 year ago

This is for the train.py file. There are multiple places in the code where CUDA is used, and I'm wondering if there is a simple way to switch everything over to CPU processing.

mya2152 commented 1 year ago

Unless there is a way to get past this issue:

torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 58.00 MiB (GPU 0; 2.00 GiB total capacity; 1.66 GiB already allocated; 0 bytes free; 1.72 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
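For reference, the fragmentation workaround mentioned in that error message is configured through the PYTORCH_CUDA_ALLOC_CONF environment variable, which has to be set before the first CUDA allocation. A minimal sketch of how that could be done in a notebook is below (the 64 MiB split size is only an illustrative value, and on a 2 GB card this alone is unlikely to make training fit):

```python
import os

# Must be set before PyTorch makes its first CUDA allocation, ideally in a
# fresh process/kernel before importing torch. It caps the size of cached
# allocation blocks to reduce fragmentation. 64 MiB is just an example value.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:64"

import torch
```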

effusiveperiscope commented 1 year ago

To my knowledge nobody has attempted CPU training. Typically 16 GB of VRAM is needed for training. If you really want to try it on CPU, I would start by removing the .cuda() calls in train.py that move the models and tensors to the GPU.
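For anyone attempting this, the usual pattern is to replace hard-coded .cuda() calls with a single device variable so the same code runs on either CPU or GPU. A rough sketch of the idea is below; the model and batch here are dummy stand-ins, not the actual objects in train.py:

```python
import torch
import torch.nn as nn

# Choose the device once; use torch.device("cpu") to force CPU training
# even on a machine that has CUDA available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Dummy stand-ins for the networks and batches used in train.py.
model = nn.Linear(256, 256).to(device)   # instead of model.cuda()
batch = torch.randn(8, 256).to(device)   # instead of batch.cuda()

out = model(batch)
print(out.device)  # cpu or cuda:0, depending on the device chosen above
```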