Open JokeCorleone opened 4 years ago
Hello. Use
python encoder_train.py my_run --clean_data_root D:\Datasets\SV2TTS\encoder
Hello @vlomme
Thanks for your support.
When I run python encoder_train.py my_run --clean_data_root D:\Datasets\SV2TTS\encoder,
the result is:
File "encoder_train.py", line 46, in
Hello, I'm getting the same error with torch==1.5.0. I see that we have
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
# FIXME: currently, the gradient is None if loss_device is cuda
loss_device = torch.device("cpu")
After that, when we call clip_grad_norm_ from torch, it operates on all of the parameters, two of which are on cpu and the rest on cuda:0:
total_norm = torch.norm(torch.stack([torch.norm(p.grad.detach(), norm_type) for p in parameters]), norm_type)
which throws the error. Could the torch version be incorrect? I'm using 1.5.0.
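One way to work around the device mixing without reinstalling torch is to move each per-parameter norm to the CPU before stacking, so torch.stack never sees tensors on different devices. A minimal sketch (clip_grad_norm_mixed is a hypothetical helper, not part of this repo, mirroring the upstream clip_grad_norm_ logic):

```python
import torch

def clip_grad_norm_mixed(parameters, max_norm, norm_type=2.0):
    """Variant of torch.nn.utils.clip_grad_norm_ that tolerates
    parameters living on different devices (cpu + cuda:0) by
    moving each per-parameter norm to the CPU before stacking."""
    parameters = [p for p in parameters if p.grad is not None]
    if not parameters:
        return torch.tensor(0.0)
    total_norm = torch.norm(torch.stack(
        [torch.norm(p.grad.detach(), norm_type).cpu() for p in parameters]
    ), norm_type)
    clip_coef = max_norm / (total_norm + 1e-6)
    if clip_coef < 1:
        for p in parameters:
            # scale each gradient in place, on its own device
            p.grad.detach().mul_(clip_coef.to(p.grad.device))
    return total_norm
```

Note that this computes one global norm across all devices, so the clipping result matches the stock function; it just avoids the mixed-device stack.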
[UPDATE] Reinstalled torch and it started training!
pip uninstall torch  # you might need to run it twice
pip list | grep torch  # check that no torch is left
pip install torch  # or pip install torch==1.5.0 to pin the version
Hello. When I trained the vocoder (ran python vocoder_train.py my_run D:\Datasets), I encountered an error:
+------------+--------+--------------+
| Batch size | LR     | Sequence Len |
+------------+--------+--------------+
| 60         | 0.0001 | 1000         |
+------------+--------+--------------+
RuntimeError: CUDA out of memory. Tried to allocate 118.00 MiB (GPU 0; 4.00 GiB total capacity; 2.87 GiB already allocated; 10.61 MiB free; 32.29 MiB cached)
How can I solve this error?
Not enough video memory. Reduce the batch size.
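On a 4 GB GPU, halving the batch size (or more) is usually enough. A minimal sketch, assuming the vocoder hyperparameters live in vocoder/hparams.py as in the upstream Real-Time-Voice-Cloning code this repo is based on (variable names are assumptions, check your local file):

```python
# vocoder/hparams.py (hypothetical excerpt)
voc_batch_size = 16  # lowered from 60; reduce further if CUDA still runs out of memory
```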
Thanks @vlomme
First of all, thank you for sharing the open-source Multi-Tacotron-Voice-Cloning. I have also just started learning about natural language processing and Python programming.
- I put the software in the directory D:\SV2TTS
- I put the datasets in the directory D:\Datasets; I have D:\Datasets\book and D:\Datasets\LibriSpeech
When using the code you provided, I had some training issues:
My question: How can I fix this problem?
Thanks again for sharing!