Closed asusdisciple closed 8 months ago
Hi, in my case, memory usage is about 3 GB for inference with "convert.py" (on the samples I used) and about 17 GB for training with the original training settings.
However, with different samples it can be higher (when the target or source sample is quite long).
If the target speech is quite long, you may need to handle the conversion differently.
Thanks!
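One way to handle a long target sample, sketched below, is to split the waveform into fixed-length chunks and convert each chunk separately so each pass fits in GPU memory. This is a minimal illustration with NumPy only; `max_seconds` and the `convert()` call are assumptions, not values or functions from this repo.

```python
import numpy as np

def chunk_waveform(wav, sr, max_seconds=20.0):
    """Split a long waveform into fixed-length segments so each
    conversion pass fits in GPU memory. max_seconds is an assumed
    limit; tune it for your GPU."""
    chunk_len = int(sr * max_seconds)
    return [wav[i:i + chunk_len] for i in range(0, len(wav), chunk_len)]

# Example: a 65-second mono signal at 16 kHz splits into 4 chunks
# (three of 20 s and one of 5 s).
sr = 16000
wav = np.zeros(65 * sr, dtype=np.float32)
chunks = chunk_waveform(wav, sr)
# converted = np.concatenate([convert(c) for c in chunks])  # hypothetical convert()
```

Note that naive chunking can produce audible seams at chunk boundaries; overlapping the chunks and cross-fading the outputs is a common refinement.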
Indeed, it was the long sample; this solved my problem. Thanks a lot!
May I ask if you have any experience with strange memory behaviour? I tried to run inference on a V100 with 32 GB of memory, but the model tries to allocate more than 25 GB, which does not make sense if you used a single RTX 3090. By the way, the GPU memory is completely free according to nvidia-smi before I start convert.py. Here is my error log: