-
### Describe the bug
Trying to get TTS to do a countdown, but it seems to run forever, while a similar prompt runs in a reasonable time.
Works as expected:
```
tts --text "How is the weather…
```
-
Hi,
I would like an option to reduce the verbosity of (or completely mute) the output messages when initializing TTS.
Here is some of the output I get when I initialize:
```
> tts_models/en/ljspeech/tacotron2-DDC is a…
```
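Until a verbosity flag exists, one workaround (a sketch, assuming the messages are printed to stdout; `init_quietly` and the constructor shown in the comment are hypothetical names, not part of the TTS API) is to redirect stdout during initialization:

```python
import contextlib
import io

def init_quietly(factory, *args, **kwargs):
    """Call `factory` (e.g. a TTS constructor) with its stdout chatter
    captured into a buffer instead of printed. Returns the object."""
    buffer = io.StringIO()
    with contextlib.redirect_stdout(buffer):
        obj = factory(*args, **kwargs)
    return obj

# Hypothetical usage with a constructor that prints progress messages:
# tts = init_quietly(TTS, model_name="tts_models/en/ljspeech/tacotron2-DDC")
```

This silences only stdout; messages emitted through the `logging` module would need a logger-level change instead.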
-
First, thanks for such a complete pipeline.
Second, would you consider integrating, e.g., [this repo](https://github.com/alphacep/tn2-wg) for native Russian support?
-
Hi,
I've been looking at the PyTorch/SpeechSynthesis/Tacotron2 model in this repo, and I first trained with one of the subsets of LJSpeech-1.1 to test whether my setup would work to train the m…
-
1. Do we need to train LPCNet with the LJSpeech dataset or with 16k-LP7?
2. Do we need to train both LPCNet and Tacotron2 with the same dataset?
3. Do we need to run Tacatron-2/preprocess.py, or just use
./header_remov…
-
Related to **Model/Framework(s)**
https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/Tacotron2
**Describe the bug**
The model cannot train because it hits the f…
-
Why does it ask me for a .npy file if step 6 is precisely to convert the .wav files into mel spectrograms and check the files?
"This cell automatically converts .wav to .npy, which will be needed for voi…
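The conversion that cell describes can be sketched as follows. This is a simplified stand-in, not the notebook's actual code: it uses a plain magnitude STFT rather than a mel filterbank, and the function name and parameters are assumptions.

```python
import numpy as np

def wav_to_npy(wave, out_path, n_fft=1024, hop=256):
    """Frame a 1-D waveform, take the magnitude STFT of each frame,
    and save the resulting spectrogram as a .npy file.
    Simplified illustration of the notebook's wav -> .npy step."""
    window = np.hanning(n_fft)
    frames = []
    for start in range(0, len(wave) - n_fft + 1, hop):
        frame = wave[start:start + n_fft] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    spec = np.stack(frames, axis=1)  # shape: (n_fft // 2 + 1, n_frames)
    np.save(out_path, spec)
    return spec
```

Once every .wav has a matching .npy, later cells can load the precomputed spectrograms with `np.load` instead of redoing the transform.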
-
I want to use this code for a low-resource language (Pashto), so please guide me on where I should start and what to do; I am a beginner in programming.
Thanks in advance
-
Hi,
I want to try FastSpeech on a different dataset; can you share how to extract alignments from Tacotron2?
I tried this code, but I get bad synthesis results when running inference on long sent…
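One common way to turn a Tacotron2 attention matrix into the per-token durations FastSpeech needs is to assign each decoder frame to the encoder step it attends to most strongly, then count frames per step. A sketch (the alignment array itself is assumed to come from a trained Tacotron2 forward pass):

```python
import numpy as np

def alignment_to_durations(alignment):
    """alignment: (n_decoder_frames, n_encoder_steps) attention weights.
    Each decoder frame is assigned to its argmax encoder step; the
    duration of an encoder step is how many frames it received."""
    assigned = alignment.argmax(axis=1)                 # (n_decoder_frames,)
    n_enc = alignment.shape[1]
    durations = np.bincount(assigned, minlength=n_enc)  # (n_encoder_steps,)
    return durations
```

By construction `durations.sum()` equals the number of decoder frames, which is the consistency property FastSpeech's length regulator relies on.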
-
When vocoding a single example, inference can run faster by folding the input into a batch of chunks.
https://github.com/pytorch/audio/blob/31dbb7540c78fe5d176948764cf9a20f55ac80dc/exam…
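The folding idea can be sketched with plain NumPy (the chunk size and the helper names are assumptions; the linked torchaudio example additionally handles chunk overlap to avoid boundary artifacts):

```python
import numpy as np

def fold(mel, chunk):
    """Split a (n_mels, T) spectrogram along time into a batch of chunks,
    zero-padding the tail so T is divisible by `chunk`.
    Returns the batch, shape (n_chunks, n_mels, chunk), and the original T."""
    n_mels, t = mel.shape
    pad = (-t) % chunk
    mel = np.pad(mel, ((0, 0), (0, pad)))
    batch = mel.reshape(n_mels, -1, chunk).transpose(1, 0, 2)
    return batch, t

def unfold(batch, orig_len, hop=1):
    """Concatenate per-chunk vocoder outputs back along time and trim the
    padding added by `fold`. `hop` is samples produced per mel frame
    (1 here, since the sketch uses an identity 'vocoder')."""
    return np.concatenate(list(batch), axis=-1)[..., :orig_len * hop]
```

The vocoder then processes all chunks in one batched forward pass instead of one long sequential pass, which is where the speedup comes from.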