-
Epoch 625: : 2000batch [31:38, 2.59batch/s, batch_size=10, lr=0.0004, mel=0.0152, step=9999]
==============
Epoch 624 ended. Steps: 9999. {'total_loss': 0.033, 'mel': 0.033, 'batch_size': 9.8125, …
-
Hi Authors,
Great work; I'm sure I will use this going forward. I have a question, however: given the encoder and vocoder, which hyperparameters are affected when the sample rate is changed? I'm looking t…
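A common answer in vocoder projects is that the frame-level analysis parameters must be rescaled so frame durations in milliseconds stay constant. The sketch below illustrates the idea; the parameter names (`fft_size`, `hop_size`, `win_length`, `fmax`) mirror typical vocoder configs but are assumptions, not this project's exact keys.

```python
# Sketch (not from the repo): when changing sample rate, STFT/mel
# parameters are usually rescaled so the frame duration in ms is kept.
def rescale_stft_params(old_sr: int, new_sr: int,
                        fft_size: int, hop_size: int, win_length: int,
                        fmax: float) -> dict:
    """Scale frame-level hyperparameters proportionally to the sample rate."""
    ratio = new_sr / old_sr
    return {
        "fft_size": round(fft_size * ratio),
        "hop_size": round(hop_size * ratio),     # keeps hop duration (ms) fixed
        "win_length": round(win_length * ratio),
        "fmax": min(fmax * ratio, new_sr / 2),   # never exceed Nyquist
    }

print(rescale_stft_params(22050, 44100, fft_size=1024, hop_size=256,
                          win_length=1024, fmax=8000.0))
# -> {'fft_size': 2048, 'hop_size': 512, 'win_length': 2048, 'fmax': 16000.0}
```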
-
# Crash Description
The following exception is thrown in `train.log`. It seems the training program crashed because some wav files in the training data are too short. (I set `remove_short_samples` to …
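A pragmatic workaround for this kind of crash is to filter out the short wavs before training, which is what a `remove_short_samples`-style option would do. A minimal sketch, assuming the data are plain PCM wav files (the 0.5 s threshold is illustrative, not the project's default):

```python
import wave

MIN_SECONDS = 0.5  # illustrative threshold, not the recipe's default

def duration_seconds(path: str) -> float:
    """Return the duration of a PCM wav file in seconds."""
    with wave.open(path, "rb") as f:
        return f.getnframes() / f.getframerate()

def filter_short_samples(paths, min_seconds=MIN_SECONDS):
    """Keep only files at least `min_seconds` long."""
    kept = [p for p in paths if duration_seconds(p) >= min_seconds]
    print(f"kept {len(kept)} files, dropped {len(paths) - len(kept)} short ones")
    return kept
```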
-
I would like to use one of the pretrained models that use x-vectors to synthesize speech for a speaker not in the training set. From what I understand from other discussions, x-vectors work better tha…
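The approach usually suggested in those discussions is to extract an x-vector for each of a few enrollment utterances from the unseen speaker, average them, and pass the mean vector to the TTS model as the speaker embedding. A minimal sketch with dummy vectors; in practice they would come from a pretrained x-vector extractor:

```python
import math

def mean_embedding(vectors):
    """Average several per-utterance embeddings into one speaker embedding."""
    dim = len(vectors[0])
    return [sum(v[i] for v in vectors) / len(vectors) for i in range(dim)]

def cosine(a, b):
    """Cosine similarity, handy for sanity-checking the averaged embedding."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Dummy x-vectors for three enrollment utterances of the same speaker.
utts = [[0.9, 0.1, 0.0], [1.1, -0.1, 0.0], [1.0, 0.0, 0.2]]
spk_emb = mean_embedding(utts)  # -> roughly [1.0, 0.0, 0.067]
```

Averaging smooths out per-utterance channel and content variation, which is why a mean over several utterances usually beats a single-utterance embedding for unseen speakers.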
-
I am trying to replicate the results of a training run I have already completed, on both Ubuntu and Windows (the code works on GPU); I also tried the code on CPU and it works properly (…
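The usual first step when runs diverge is to pin every random seed. The sketch below only seeds the stdlib RNG; in a real PyTorch training script you would also call `numpy.random.seed(seed)`, `torch.manual_seed(seed)`, and set `torch.backends.cudnn.deterministic = True` (standard PyTorch API), though even then bit-exact results across Ubuntu and Windows are not guaranteed:

```python
import os
import random

def set_seed(seed: int) -> None:
    """Pin the stdlib RNG and hash seed; extend with numpy/torch in real runs."""
    os.environ["PYTHONHASHSEED"] = str(seed)
    random.seed(seed)

set_seed(1234)
run_a = [random.random() for _ in range(3)]
set_seed(1234)
run_b = [random.random() for _ in range(3)]
assert run_a == run_b  # same seed -> identical draws
```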
-
I have a general question regarding TTS that can pronounce initialisms like NFL and NIH (spelled out letter by letter) and acronyms like NASA, NATO, and AIDS (pronounced as words). I searched in the help and issues and couldn't find anything relevant. …
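This is typically handled in text normalization before the model: initialisms get spelled out letter by letter, while a whitelist of acronyms is passed through so they are pronounced as words. A minimal dictionary-based sketch (the word lists are illustrative):

```python
# Illustrative whitelist of acronyms to pronounce as words.
SPOKEN_AS_WORDS = {"NASA", "NATO", "AIDS"}

def normalize_token(token: str) -> str:
    """Spell out all-caps initialisms letter by letter; keep acronyms intact."""
    if token.isupper() and len(token) > 1 and token not in SPOKEN_AS_WORDS:
        return " ".join(token)  # "NFL" -> "N F L"
    return token

def normalize(text: str) -> str:
    return " ".join(normalize_token(t) for t in text.split())

print(normalize("The NFL and NASA"))  # -> "The N F L and NASA"
```

Real frontends are more elaborate (pronunciation lexicons, punctuation handling), but a lookup like this already covers the common cases.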
-
Hi, I'm facing the following error while running the espnet/egs2/ljspeech/tts1 recipe for Hindi with g2p = espeak_ng_hindi and language set to hi. I have installed phonemizer, but it is still giving…
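One common cause of errors like this is that phonemizer's espeak backend needs the espeak-ng (or espeak) program on the system; installing the `phonemizer` Python package alone is not enough. A quick diagnostic sketch:

```python
import shutil

def espeak_available() -> bool:
    """Check whether an espeak binary is on PATH (needed by phonemizer's backend)."""
    return any(shutil.which(cmd) for cmd in ("espeak-ng", "espeak"))

if not espeak_available():
    print("espeak-ng not found on PATH; install it first, e.g. "
          "`sudo apt-get install espeak-ng` on Debian/Ubuntu")
```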
-
Hello, I want to train a HiFi-GAN on my own dataset with your recipe. Could you tell me how to do that? I mean, is there a command like `parallel-wavegan-train` to run the training script?
T…
-
I have read the [TTS training guide](https://github.com/espnet/espnet/blob/master/egs2/TEMPLATE/tts1/README.md) and followed [this issue](https://github.com/espnet/espnet/issues/2712).
Now I want t…
-
`NNSVS` now supports `Parallel WaveGAN` and `NSF`.
Is it possible to use these vocoders on `ENUNU`?
# Related pages
- https://github.com/nnsvs/nnsvs/blob/master/docs/train_vocoders.rst
- https…