-
Working on it!
-
Training on a Russian dataset. The output says my loss is below 0.2, and the default is 500 epochs.
Now I'm at epoch 1333 and still get _Warning! Reached max decoder steps_.
Should I keep going, or is it screwed…
-
Hello, this project is very nice, thank you for sharing it!
I've trained English and Chinese models, each with a total of hundreds of speakers, using LibriTTS and thchs30 (a Chinese dataset), and…
-
The Carnegie-Mellon pronouncing dictionary has been added to `./data-raw`; write a function to add it to dataframes:
- to lemmas?
- to tokens?
- to both?
Convert the Carnegie-Mellon pronunciations to IPA.
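The conversion step above can be sketched as a lookup table from ARPAbet (the symbol set CMUdict uses) to IPA. The function name and the handful of mapping entries below are illustrative, not the project's actual code; a complete table has roughly 39 symbols.

```python
# Partial ARPAbet -> IPA mapping for illustration; extend to the full
# CMUdict symbol set for real use.
ARPABET_TO_IPA = {
    "AA": "ɑ", "AE": "æ", "AH": "ʌ", "EH": "ɛ", "IY": "i",
    "K": "k", "T": "t", "S": "s", "N": "n", "L": "l",
}

def cmu_to_ipa(pronunciation: str) -> str:
    """Convert a CMUdict pronunciation string (e.g. 'K AE1 T') to IPA."""
    ipa = []
    for phone in pronunciation.split():
        base = phone.rstrip("012")  # drop the lexical-stress digits
        ipa.append(ARPABET_TO_IPA.get(base, base))  # pass unknowns through
    return "".join(ipa)

print(cmu_to_ipa("K AE1 T"))  # → kæt
```

Applied per row of a dataframe, this would give an IPA column for either the lemma or the token table.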
-
The mel spectrograms output directly by TransformerTTS seem to be incompatible with the input HifiGAN expects.
I was able to make it work by applying the following patch to HifiGAN (please ignore the debug prints):
…
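The patch itself is elided above, but a common source of this kind of vocoder mismatch is mel layout and scaling: HiFi-GAN is typically trained on natural-log-compressed mels shaped `[batch, n_mels, frames]`, while an acoustic model may emit time-major, linear-scale mels. The adapter below is a hypothetical sketch of that fix (function name, shape heuristic, and clip floor are assumptions, not the author's patch):

```python
import numpy as np

def adapt_mel(mel: np.ndarray, already_log: bool = False) -> np.ndarray:
    """Hypothetical adapter: reshape a 2-D mel to [1, n_mels, frames] and
    apply natural-log dynamic-range compression, as HiFi-GAN expects."""
    if mel.ndim != 2:
        raise ValueError("expected a 2-D mel spectrogram")
    # Heuristic: assume the longer axis is time, so a time-major
    # [frames, n_mels] array gets transposed to [n_mels, frames].
    if mel.shape[0] > mel.shape[1]:
        mel = mel.T
    if not already_log:
        # Clip before the log to avoid -inf on silent bins.
        mel = np.log(np.clip(mel, a_min=1e-5, a_max=None))
    return mel[None, ...]  # add the batch dimension
```

Whether this matches the elided patch depends on how TransformerTTS was configured; checking the normalization used at training time is the safer route.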
-
Hi. I want to implement a real-time TTS demo with my model.
(link : https://colab.research.google.com/github/espnet/notebook/blob/master/espnet2_tts_realtime_demo.ipynb#scrollTo=J-Bvca5mE7bT)
But in…
-
Although the CMUDict setup doesn't raise an exception, I tried it with another dataset and I believe there is a bug in the `get_batch(self, data, bucket_id=None)` method of `seq2seq_model.py`. Specifically, I belie…
-
Hey,
I'm trying to run training with Tacotron 1 using GST, and I get the error on the very first batch.
PyTorch versions: 1.8 and 1.7.1 (both yielded the same error)
Python version: 3.8.0
`T…
-
Hey guys, FastSpeech and Tacotron achieve good results on a single language, but what about synthesizing one speaker's voice across multiple languages (>=2)? It seems obvious that we could create a datase…
-
Korean TTS is now available; thanks to Jaehyoung Kim (@crux153) for his support :D. The model uses the KSS dataset (https://www.kaggle.com/bryanpark/korean-single-speaker-speech-dataset). Thanks to @Kyubyong fo…