-
Finally I got _python -m multiproc train.py etc..._ to work. Simple question: how do I now synthesize audio from a specific checkpoint?
In github.com/keithito/tacotron it was pretty simple: _p…
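The repo's actual synthesis entry point may differ, but a common first step is selecting the checkpoint you want from the checkpoint directory. A minimal sketch, assuming filenames like `checkpoint_30000.pt` (the naming pattern is an assumption; adjust the regex to match your repo):

```python
import re

def pick_checkpoint(filenames, step=None):
    """Return the checkpoint filename for `step`, or the latest one if step is None.

    Assumes names like 'checkpoint_30000.pt' (hypothetical pattern).
    """
    pattern = re.compile(r"checkpoint_(\d+)\.pt$")
    steps = {}
    for name in filenames:
        m = pattern.search(name)
        if m:
            steps[int(m.group(1))] = name
    if not steps:
        raise FileNotFoundError("no checkpoint files matched the pattern")
    key = step if step is not None else max(steps)
    return steps[key]

files = ["checkpoint_10000.pt", "checkpoint_30000.pt", "train.log"]
print(pick_checkpoint(files))              # latest: checkpoint_30000.pt
print(pick_checkpoint(files, step=10000))  # a specific step
```

From there, the usual PyTorch pattern is to load the file, restore the model's `state_dict`, and call the model's inference method on an encoded text sequence.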
-
Can Tacotron 2 be taught to sing?
Can Tacotron 2 be taught to read a second line of text specifying emotion, e.g. angry?
-
![step-30000-align](https://user-images.githubusercontent.com/6031938/38714938-2c8d73e4-3f0b-11e8-988e-31534924f98c.png)
![step-30000-pred-mel-spectrogram](https://user-images.githubusercontent.com/6…
-
Hi @r9y9, thank you for sharing this wonderful program.
I downloaded your pre-trained model and tried to synthesize by typing this:
python synthesis.py checkpoint/lj_check.pth generated/test_awb --condit…
-
Currently, only MoL is supported as the parametric output distribution, but a Gaussian output should be pretty easy to implement. I will make some time to support it, perhaps this weekend.
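The difference is small: instead of emitting mixture-of-logistics parameters per timestep, the network emits just a mean and a log-scale, training minimizes the Gaussian negative log-likelihood, and generation samples from that Gaussian. A minimal scalar sketch (the real implementation would operate on tensors; names here are illustrative):

```python
import math
import random

def gaussian_nll(y, mu, log_sigma):
    """Negative log-likelihood of y under N(mu, exp(log_sigma)^2)."""
    sigma = math.exp(log_sigma)
    return 0.5 * math.log(2 * math.pi) + log_sigma + 0.5 * ((y - mu) / sigma) ** 2

def sample_gaussian(mu, log_sigma, rng=random):
    """Draw one sample from the predicted Gaussian (generation step)."""
    return mu + math.exp(log_sigma) * rng.gauss(0.0, 1.0)

# NLL is minimized when the prediction matches the target:
print(gaussian_nll(0.0, 0.0, 0.0))  # 0.5 * log(2*pi) ≈ 0.9189
```

Predicting `log_sigma` rather than `sigma` keeps the scale positive without a clamp, the same trick the MoL parameterization uses for its logistic scales.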
-
I have been trying for almost a week to use WaveGlow or WaveNet as the vocoder, but it only generates pure noise. If I use Griffin-Lim to generate the audio from the mel-spectrogram, I could get pretty good …
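Pure noise from a neural vocoder while Griffin-Lim sounds fine usually means the mel features fed to the vocoder do not match the ones it was trained on: sampling rate, FFT size, hop size, number of mel bins, fmin/fmax, or normalization. A small helper to diff two feature configurations (the parameter names and values below are illustrative, not taken from any particular repo):

```python
def config_mismatches(synth_cfg, vocoder_cfg):
    """Return {key: (synth_value, vocoder_value)} for every differing entry."""
    keys = set(synth_cfg) | set(vocoder_cfg)
    return {k: (synth_cfg.get(k), vocoder_cfg.get(k))
            for k in sorted(keys)
            if synth_cfg.get(k) != vocoder_cfg.get(k)}

# hypothetical configs for the acoustic model and the vocoder
acoustic = {"sample_rate": 22050, "n_fft": 1024, "hop_size": 256, "num_mels": 80}
vocoder  = {"sample_rate": 22050, "n_fft": 1024, "hop_size": 275, "num_mels": 80}
print(config_mismatches(acoustic, vocoder))  # {'hop_size': (256, 275)}
```

Any non-empty result is a likely cause of the noise; the vocoder must be retrained on, or fed with, features extracted under its own settings.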
-
Any plan for a [WORLD vocoder](https://github.com/mmorise/World) for multi-speaker TTS?
-
Hi - I was wondering if anyone has attempted to use WORLD vocoder features as part of a WaveNet implementation. That is, instead of vocoding through pyworld (as is done in the tutorials here), the voco…
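Conditioning WaveNet on WORLD features amounts to stacking the per-frame WORLD streams (e.g. log-F0, a voiced/unvoiced flag, spectral envelope bins, band aperiodicities) into one local-conditioning vector per frame, in place of the mel frame. A stdlib-only sketch of that frame-wise concatenation, assuming the WORLD analysis (e.g. via pyworld) has already been run; the stream names and toy dimensions are illustrative:

```python
def stack_world_frames(lf0, vuv, sp, ap):
    """Concatenate per-frame WORLD streams into conditioning vectors.

    lf0, vuv: one scalar per frame; sp, ap: one list of bins per frame.
    Returns a list of frames, each [lf0, vuv, *sp_bins, *ap_bins].
    """
    assert len(lf0) == len(vuv) == len(sp) == len(ap), "streams must align"
    return [[f, v] + list(s) + list(a)
            for f, v, s, a in zip(lf0, vuv, sp, ap)]

# two frames, 3 spectral bins and 1 aperiodicity band each (toy sizes)
cond = stack_world_frames([5.1, 5.2], [1, 1],
                          [[0.1, 0.2, 0.3], [0.4, 0.5, 0.6]],
                          [[0.9], [0.8]])
print(len(cond), len(cond[0]))  # 2 frames, 6 features per frame
```

The resulting matrix is then upsampled to the waveform rate and fed to WaveNet's local-conditioning input exactly as mel frames would be.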
-
I hope someone can help me with a problem I ran into while trying to run both the DeepVoice3 and WaveNet systems.
When I run TTS with DeepVoice3 on LJSpeech, I get a robotic sound.
I know that WaveNet can p…
-
First, thank you very much @r9y9 and everyone for the great work!
Does anyone want to share pre-trained weights that sound good?
Particularly for LJSpeech if possible. My training is to be conve…