-
Hi,
I'm working on a paper related to neural vocoders.
In that paper, I want to use a multi-speaker TTS model to test vocoders on a multi-speaker dataset (VCTK).
So, I will add experimental source…
-
The code snippet below is from the E2E-TTS demo Colab. When I load the vocoder, the results of the Tacotron model change. I think this may be a PyTorch issue, but I tried it in a different scenario…
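A minimal, self-contained sketch of one possible cause (my guess, not confirmed in the thread): Tacotron2 keeps prenet dropout enabled even at inference, so its output depends on PyTorch's global RNG state, and loading a vocoder consumes random draws (weight initialization) that shift this state. The toy `prenet` below stands in for the real model:
```
import torch

# toy stand-in for Tacotron2's prenet, which keeps dropout active at inference
prenet = torch.nn.Sequential(torch.nn.Linear(256, 256), torch.nn.Dropout(0.5))
x = torch.ones(1, 256)

torch.manual_seed(0)
a = prenet(x)                  # "before loading the vocoder"

torch.manual_seed(0)
_ = torch.nn.Linear(64, 64)    # stand-in for vocoder loading: consumes RNG draws
b = prenet(x)                  # different dropout mask, hence different output

print(torch.allclose(a, b))   # False: the RNG state moved between the runs
```
If this is the cause, calling `torch.manual_seed(...)` immediately before each synthesis run should make the outputs match again.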
-
Sorry for a noob question, but what is the best (in terms of quality) pretrained English TTS available today? Is it the following combination, or is there something better?
1. Tacotron2 | char_train_n…
-
As mentioned in the "pytorch synthesizer" issue, I'm trying to retrain a synthesizer in TensorFlow 2.x (the model is inspired by NVIDIA's PyTorch implementation and is available on my GitHub).
Actual…
-
Hi,
I noticed the following code:
```
# compute the mel spectrogram of the input waveform
mel_spec = audio_processor.mel_spectrogram(wav_data)
# LPC analysis (coefficients presumably derived from the mel features):
# returns the audio, the LPC prediction, and the residual (error)
audio, pred, error = audio_processor.lpc_audio(mel_spec, wav_data)
```
So the audio processor …
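My reading of what `lpc_audio` computes (an assumption about the repo, illustrated with plain NumPy rather than the repo's actual code): linear-predictive coding splits each float sample into a prediction from past samples plus a residual, so that `audio ≈ pred + error`:
```
import numpy as np

def lpc_decompose(wav, coeffs):
    """Split a float waveform into an LPC prediction and its residual (error).

    `coeffs` are the LPC filter coefficients, most recent lag first.
    """
    order = len(coeffs)
    pred = np.zeros_like(wav)
    for t in range(order, len(wav)):
        # predict sample t from the previous `order` samples
        pred[t] = np.dot(coeffs, wav[t - order:t][::-1])
    error = wav - pred          # residual the vocoder would then model
    return pred, error
```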
-
Hi, I see there are pretrained checkpoints of ParallelWaveGAN that have to be downloaded before training. Does that mean I have to train ParallelWaveGAN first if I use my old dataset? Thanks.
-
Hi all,
Is it possible to synthesize waveforms from Tacotron2 (or FastSpeech2) models alone, or must we train a vocoder?
Best regards
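For what it's worth, a minimal sketch of the vocoder-free option (my addition, assuming the acoustic model emits a dB-scaled mel spectrogram; the parameters below must match the model's feature extraction): Griffin-Lim can invert a mel spectrogram to a waveform without any trained vocoder, at noticeably lower quality than a neural vocoder.
```
import librosa

def mel_to_wave_griffin_lim(mel_db, sr=22050, n_fft=1024, hop_length=256, n_iter=60):
    """Invert a (n_mels, frames) dB-scaled mel spectrogram via Griffin-Lim."""
    mel_power = librosa.db_to_power(mel_db)         # undo the dB scaling
    return librosa.feature.inverse.mel_to_audio(    # Griffin-Lim under the hood
        mel_power, sr=sr, n_fft=n_fft, hop_length=hop_length, n_iter=n_iter
    )
```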
-
I couldn't run it. I got the following error:
AttributeError: module 'numba' has no attribute 'jit'
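A quick diagnostic sketch (my assumption: this `AttributeError` usually means either a broken/mismatched numba install or a local file named `numba.py` shadowing the real package):
```
import numba

# If a stray local numba.py is shadowing the package, this path reveals it.
print(numba.__file__)

# A healthy install reports a version and exposes numba.jit.
print(getattr(numba, "__version__", "no __version__ attribute"))
print(hasattr(numba, "jit"))
```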
-
Continued from #977
## Action items
- [ ] VAE-based Tacotron2 (e.g. [GMVAE-Tacotron2](https://arxiv.org/pdf/1810.07217.pdf))
  - TensorFlow implementation (thanks, @rishikksh20)
    https://git…
-
Thanks for your work!
I use the `tts` mel-spectrogram output directly as the input to r9y9's [wavenet_vocoder](https://github.com/r9y9/wavenet_vocoder) pretrained model in order to get better quality, b…
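A small check I'd add here (my assumption, not from the thread: quality problems in this kind of setup often come from mismatched mel normalization, since pretrained vocoders expect mels preprocessed exactly like their training features):
```
import numpy as np

def describe_mel(mel, name):
    """Print the statistics that must agree between the TTS output
    and the vocoder's training features (shape and value range)."""
    mel = np.asarray(mel)
    print(f"{name}: shape={mel.shape}, "
          f"min={mel.min():.3f}, max={mel.max():.3f}, mean={mel.mean():.3f}")

# Compare the two before blaming the vocoder, e.g.:
# describe_mel(tts_mel, "tts output")         # hypothetical TTS mel output
# describe_mel(reference_mel, "vocoder ref")  # hypothetical reference features
```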