-
Korean TTS is now available; thanks to Jaehyoung Kim (@crux153) for his support :D. The model uses the KSS dataset (https://www.kaggle.com/bryanpark/korean-single-speaker-speech-dataset). Thanks to @Kyubyong fo…
-
Hi, this one's probably for @mapledxf and is a duplicate of this issue: https://github.com/TensorSpeech/TensorFlowTTS/issues/274
I've converted my FastSpeech2 model to a TFLite model in the way sho…
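For context, a minimal conversion along the lines of the repo's TFLite example might look like the sketch below; the pretrained model name and the `inference_tflite` concrete function are assumptions here and may differ from the actual setup:

```python
import tensorflow as tf
from tensorflow_tts.inference import TFAutoModel

# Load a trained FastSpeech2; the pretrained name below is only a placeholder.
fastspeech2 = TFAutoModel.from_pretrained("tensorspeech/tts-fastspeech2-ljspeech-en")

# Convert from a TFLite-friendly concrete function. The repo's TFLite example
# exposes one (assumed here to be called `inference_tflite`); adjust if yours differs.
concrete_fn = fastspeech2.inference_tflite.get_concrete_function()
converter = tf.lite.TFLiteConverter.from_concrete_functions([concrete_fn])
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,  # standard TFLite kernels
    tf.lite.OpsSet.SELECT_TF_OPS,    # fall back to TF ops where needed
]
tflite_model = converter.convert()

with open("fastspeech2.tflite", "wb") as f:
    f.write(tflite_model)
```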
-
I ran `examples/fastspeech2_libritts/libri_experiment/prepare_libri.ipynb` to prepare the LibriTTS dataset, but I have noticed clips that exceed 11 seconds (I have `max_file_len` set to 11 seconds) in t…
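For reference, a small sketch of how one might scan the prepared output for over-long clips; the directory layout and the `soundfile` dependency are assumptions, so point it at wherever the notebook writes its wavs:

```python
import pathlib
import soundfile as sf  # pip install soundfile

MAX_SECONDS = 11.0
dataset_dir = pathlib.Path("./libritts")  # placeholder: the notebook's output directory

# Report any prepared clip longer than the intended max_file_len cutoff.
for wav_path in sorted(dataset_dir.rglob("*.wav")):
    duration = sf.info(wav_path).duration
    if duration > MAX_SECONDS:
        print(f"{wav_path}: {duration:.2f} s exceeds {MAX_SECONDS} s")
```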
-
Hi, when I tried to replace the ljspeech TFLite model with a new TFLite model, I encountered this problem:
`com.example.myapplication E/InputWorker: Exception:
java.lang.IllegalArgumentExcepti…
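A `java.lang.IllegalArgumentException` from the TFLite interpreter usually means the tensors the app feeds no longer match the new model. As a first check, one can dump the new model's input/output details in Python and compare them with what the Android code expects; the file name below is a placeholder:

```python
import tensorflow as tf

# Inspect the new .tflite model's I/O signature and compare it with the one
# the Android demo was written against (shapes, dtypes, and tensor order).
interpreter = tf.lite.Interpreter(model_path="new_model.tflite")
interpreter.allocate_tensors()

for detail in interpreter.get_input_details():
    print("input :", detail["name"], detail["shape"], detail["dtype"])
for detail in interpreter.get_output_details():
    print("output:", detail["name"], detail["shape"], detail["dtype"])
```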
-
I'm currently trying to train with a new speaker dataset. Do you know how to use MFA to get TextGrid files for Korean? It seems like we need a lexicon file like the one here for LibriSpeech: http://www.openslr.…
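One workaround, sketched below under the assumption that a jamo-level grapheme lexicon is acceptable to MFA for Korean, is to build the lexicon directly from the transcripts with the `jamo` package; the transcript layout assumed here is the KSS-style `path|text|…` format and may need adjusting:

```python
# Rough grapheme-level lexicon builder for MFA when no Korean pronunciation
# dictionary is at hand: map each word to its jamo sequence. Whether MFA aligns
# well on plain jamo "phones" is an assumption to verify on your data.
from jamo import h2j, j2hcj  # pip install jamo

def lexicon_entry(word: str) -> str:
    # Decompose Hangul syllables into jamo and space-separate them as "phones".
    phones = " ".join(j2hcj(h2j(word)))
    return f"{word}\t{phones}"

words = set()
with open("transcript.txt", encoding="utf-8") as f:  # assumed KSS-style "path|text|..." lines
    for line in f:
        text = line.strip().split("|")[1]
        words.update(text.split())

with open("korean_lexicon.txt", "w", encoding="utf-8") as f:
    for word in sorted(words):
        f.write(lexicon_entry(word) + "\n")
```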
-
Hi all,
Is it possible to synthesize waveforms from only tacotron2 (or FastSpeech2) models, or must we train a vocoder?
Best regards
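Edit: for anyone finding this later, you can get audible (if lower-quality) output from the generated mel without training a vocoder by inverting it with Griffin-Lim. A rough sketch, assuming LJSpeech-style feature settings (22050 Hz, 1024-point FFT, 256 hop, log10 mels); these must match your own preprocessing config, and if your model outputs normalized mels you need to de-normalize with the dataset stats first:

```python
import numpy as np
import librosa
import soundfile as sf

def mel_to_wav(mel, sr=22050, n_fft=1024, hop_length=256, fmin=80, fmax=7600):
    # mel is assumed to be [n_frames, n_mels] and log10-amplitude scaled.
    mel = np.asarray(mel, dtype=np.float32).T   # -> [n_mels, n_frames]
    mel = np.power(10.0, mel)                   # undo the log
    linear = librosa.feature.inverse.mel_to_stft(
        mel, sr=sr, n_fft=n_fft, fmin=fmin, fmax=fmax, power=1.0)
    return librosa.griffinlim(linear, n_iter=60, hop_length=hop_length)

wav = mel_to_wav(np.load("generated_mel.npy"))  # placeholder file name
sf.write("griffin_lim_test.wav", wav, 22050)
```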
-
It can be hard to test mels generated from models without a universal vocoder.
You can ping me on this issue; when I've got some free time I'll check https://github.com/kan-bayashi/ParallelWaveGAN and s…
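If it helps in the meantime, a rough sketch of vocoding an exported mel with a pretrained checkpoint from that repo is below; `load_model`, the checkpoint name, and the mel normalization are taken on trust from the ParallelWaveGAN README and may have changed:

```python
import numpy as np
import torch
import soundfile as sf
from parallel_wavegan.utils import load_model  # from kan-bayashi/ParallelWaveGAN

# Vocode an exported mel with a pretrained ParallelWaveGAN checkpoint. The
# checkpoint path is a placeholder, and the vocoder expects mels computed with
# the same feature settings/statistics it was trained on.
vocoder = load_model("checkpoint-400000steps.pkl")
vocoder.remove_weight_norm()
vocoder = vocoder.eval()

mel = np.load("generated_mel.npy")  # [n_frames, n_mels], placeholder file name
with torch.no_grad():
    wav = vocoder.inference(torch.from_numpy(mel).float()).view(-1).numpy()
sf.write("pwg_test.wav", wav, 22050)
```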
-
I have run "fix_mismatch.py"
I checked all the duration files with sum(duration) == len(mel)
root@275f4d711e01:/workspace# python examples/fastspeech2/train_fastspeech2.py --train-dir ./dump_ljspe…
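For completeness, the check described above can be scripted roughly like this; the `durations/` and `norm-feats/` file suffixes follow the usual TensorFlowTTS dump layout and should be verified against your own dump folder:

```python
import pathlib
import numpy as np

# Sanity-check that sum(duration) == len(mel) for every utterance in the dump.
dump_dir = pathlib.Path("./dump_ljspeech/train")
for dur_path in sorted(dump_dir.glob("durations/*-durations.npy")):
    utt_id = dur_path.name.replace("-durations.npy", "")
    mel_path = dump_dir / "norm-feats" / f"{utt_id}-norm-feats.npy"
    duration = np.load(dur_path)
    mel = np.load(mel_path)
    if int(duration.sum()) != len(mel):
        print(f"{utt_id}: sum(duration)={int(duration.sum())} vs len(mel)={len(mel)}")
```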
-
I have already created durations with MFA, and also ran both preprocessing scripts (tensorflow-tts-preprocess, tensorflow-tts-normalize) with no errors, but when I ran the training script there is an erro…
-
Hi,
I have successfully trained one model and used it with Kaldi. I trained a second model with a different dataset, and now it fails to launch because it crashes while opening the mfcc.conf file.
Here'…