jxzhanggg / nonparaSeq2seqVC_code

Implementation code of non-parallel sequence-to-sequence VC
MIT License

GPU memory requirements #32

Closed ivancarapinha closed 4 years ago

ivancarapinha commented 4 years ago

Hello, what are the GPU memory requirements for using the model? I am using a GeForce GTX TITAN X with 12 GB of memory and I got the following error: RuntimeError: CUDA out of memory. Tried to allocate 132.88 MiB (GPU 0; 11.93 GiB total capacity; 11.03 GiB already allocated; 107.44 MiB free; 308.29 MiB cached)

Do you have any suggestions about how to overcome this problem? Thank you.

jxzhanggg commented 4 years ago

Hi, 12 GB of GPU memory should be enough for training the model. I trained my model with a batch size of 32 on a 12 GB GTX 1080 Ti. Because a training batch is padded to the longest sequence, very long utterances will cause the OOM problem. You can filter out these utterances by setting a threshold, for example, dropping all utterances longer than 10 seconds from the training set.
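
For reference, a minimal sketch of that kind of duration-based filtering, assuming the training data is listed as WAV file paths. The file names and the soundfile dependency are illustrative placeholders, not part of this repo's code:

```python
# Sketch: drop utterances longer than a duration threshold before training.
# Paths and helper names here are hypothetical, not from the repo.
import soundfile as sf

MAX_SECONDS = 10.0  # threshold suggested above

def keep_short_utterances(wav_paths, max_seconds=MAX_SECONDS):
    """Return only the utterances whose duration is <= max_seconds."""
    kept = []
    for path in wav_paths:
        info = sf.info(path)                      # reads header only, no full decode
        duration = info.frames / info.samplerate  # duration in seconds
        if duration <= max_seconds:
            kept.append(path)
    return kept

# Example usage with a hypothetical file list:
# with open("train_file_list.txt") as f:
#     wav_paths = [line.strip() for line in f]
# filtered = keep_short_utterances(wav_paths)
# print(f"Kept {len(filtered)} of {len(wav_paths)} utterances")
```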

ivancarapinha commented 4 years ago

I will definitely try that. Thanks for the recommendation!