-
-
Hi,
When I tried to run the egs/ljspeech/tts1 recipe and decode the trained model, I ran into a problem: the output of tts_decode is nearly empty, which makes the subsequent synthesis also emp…
-
I ran the preprocessor with my own symbol set, which is smaller than the LJSpeech symbol set, and trained a FastSpeech2 model with these new symbols, but the audio synthesized by this model is not good.
If I didn't…
-
Do I have to train a Tacotron2 model before training FastSpeech2?
In other words, is there any way to extract durations without training Tacotron2?
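For context, the usual attention-based approach takes a teacher model's soft alignment and counts how many decoder frames attend most to each encoder token; forced aligners such as the Montreal Forced Aligner are a common way to get durations without training a teacher at all. A minimal sketch of the attention-based extraction (the function and variable names are illustrative, not from the recipe):

```python
import numpy as np

def durations_from_attention(attn):
    """Count, for each encoder token, how many decoder frames attend to it most."""
    # attn: (n_decoder_frames, n_encoder_tokens) soft alignment from a teacher model
    best_token = attn.argmax(axis=1)                      # winning token per frame
    return np.bincount(best_token, minlength=attn.shape[1])

# Toy alignment: frames 0-1 attend to token 0, frame 2 to token 1
attn = np.array([[0.9, 0.1],
                 [0.8, 0.2],
                 [0.3, 0.7]])
print(durations_from_attention(attn))  # [2 1]
```

The resulting per-token counts sum to the number of mel frames, which is the invariant a duration predictor is trained against.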
-
If the input text does not end with a punctuation mark, the generated pred_mel.shape will **always** be (40000, 80), which ultimately causes the program to hang forever at [this step](https://github.com/lturing/tacotronv2_wavernn_chinese/blob/9cdcb94e2eaa0f6e4dadea3a3bf6a80f8fdee74c/tacotron_synthesize.py…
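One plausible workaround for this class of bug, until the stop condition is fixed, is to guarantee the input ends with terminal punctuation before synthesis. The helper below is hypothetical (not from the repository), and the punctuation set is an assumption:

```python
def ensure_terminal_punct(text, terminals="。！？.!?"):
    # Hypothetical guard: append a full stop if the input lacks terminal
    # punctuation, so the model can detect the end of the utterance.
    if text and text[-1] not in terminals:
        return text + "。"
    return text

print(ensure_terminal_punct("你好"))   # 你好。
print(ensure_terminal_punct("你好。"))  # 你好。
```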
-
Hello, I found an interesting paper (https://arxiv.org/pdf/2006.04558.pdf) called FastSpeech2.
It is an improved version of FastSpeech, which eliminates the teacher model and directly combines PWG trai…
-
Hi,
I'm trying to use the LibriTTS AutoProcessor for inference with my FastSpeech2 model.
```python
processor = AutoProcessor.from_pretrained(
    pretrained_path="../../tensorflow_tts/processor…
```
-
It is really an astonishingly large project.
I have seen that there is multi-speaker support in the preprocessing scripts and model configs. It would be great if anyone could share the multi-speaker aud…
-
Currently, after trying to build a way to run inference with the FastSpeech2 model, the lingering dependency on tf is a nuisance.
```python
def prepare_input(self, input_ids):
    input_ids = tf.expand_d…
```
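Assuming the truncated call is only adding a leading batch dimension (what `tf.expand_dims(x, 0)` does), that step needs nothing from TensorFlow; a plain NumPy sketch, not the repository's actual code:

```python
import numpy as np

def prepare_input(input_ids):
    # Equivalent of tf.expand_dims(input_ids, 0): prepend a batch dimension
    return np.expand_dims(np.asarray(input_ids, dtype=np.int32), axis=0)

batch = prepare_input([12, 5, 33])
print(batch.shape)  # (1, 3)
```

If the exported model accepts NumPy arrays (e.g. via ONNX Runtime or TFLite), this removes the TF import from the preprocessing path entirely.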
-
Hello, thank you for your work. I have a question: what is the purpose of masking in your work? I assume it is not for hiding future steps, since FastSpeech2 is non-autoregressive.
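For reference, in non-autoregressive models masking typically serves to ignore padded positions when variable-length sequences are batched together (in attention and in the loss), not to hide future steps. A minimal sketch of such a padding mask (names are illustrative, not from this repository):

```python
import numpy as np

def padding_mask(lengths, max_len=None):
    # True at real positions, False at padding; shape (batch, max_len)
    lengths = np.asarray(lengths)
    if max_len is None:
        max_len = int(lengths.max())
    return np.arange(max_len)[None, :] < lengths[:, None]

print(padding_mask([2, 4]).astype(int))
# [[1 1 0 0]
#  [1 1 1 1]]
```

Positions where the mask is False would get a large negative bias before the attention softmax and a zero weight in the loss.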