Closed by tailname 3 years ago
Thank you. I set the maximum value that fits in 6 gigabytes of GPU memory:
tacotron_batch_size=20
Reduce the batch size:
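For anyone else hitting this, a minimal sketch of the batch-size change, assuming (as in CorentinJ/Real-Time-Voice-Cloning) the synthesizer hyperparameters live in synthesizer/hparams.py — the exact file, attribute name, and default value may differ in your checkout, so verify against your copy:

```python
# synthesizer/hparams.py (hypothetical excerpt -- check your checkout for
# the actual attribute name and its default before editing)

# Lowered from the stock default so training fits in ~6 GB of GPU memory.
# If you still get CUDA out-of-memory errors, halve it again (e.g. 16 or 8).
tacotron_batch_size = 20
```

Note that smaller batches make each training step cheaper in memory but may slow convergence, so reduce only as far as needed to avoid out-of-memory errors.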
One question: do you know how many steps the synthesizer and vocoder need to be trained for?
Depends on the quality of the training data. The number of steps of the pretrained models is a good target (278k for the synthesizer, 428k for the vocoder). There is a learning curve to training. I recommend getting some experience with a proven dataset like LibriSpeech before changing the language.
Hello. Please help me, I do not know how to solve my problem. I ran these and they completed without errors:
python synthesizer_preprocess_audio.py <datasets_root>
python synthesizer_preprocess_embeds.py <datasets_root>/SV2TTS/synthesizer
but after typing python synthesizer_train.py my_run <datasets_root>/SV2TTS/synthesizer
it shows me a long error. I think it runs out of memory on my GTX 1660 Super. Tell the noob what to do.