-
Hi,
I trained my own Tacotron, Conformer FastSpeech 2 (CFS2), and HiFiGAN models, and now I would like to fine-tune CFS2 + HiFiGAN.
When following the instructions in egs2/TEMPLATE/tts1/README.md, I get t…
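A minimal sketch (not from the original report) of how the trained pair can be smoke-tested before fine-tuning, assuming a recent espnet2 version whose `Text2Speech` accepts the `vocoder_config`/`vocoder_file` arguments; all paths are placeholders:

```python
# Sketch only: verify that the trained CFS2 acoustic model and HiFiGAN vocoder
# load and run together before starting any fine-tuning. All paths are placeholders.
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech(
    train_config="exp/tts_train_cfs2/config.yaml",          # placeholder path
    model_file="exp/tts_train_cfs2/train.loss.ave.pth",      # placeholder path
    vocoder_config="exp/hifigan/config.yml",                 # assumes vocoder_* args exist
    vocoder_file="exp/hifigan/checkpoint-400000steps.pkl",   # assumes vocoder_* args exist
    device="cpu",
)

out = tts("This is a quick smoke test.")
sf.write("smoke_test.wav", out["wav"].numpy(), tts.fs)
```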
-
Hi there! I'm trying to run this project (BTW, I'm looking forward to it!) and when it comes time to start training a WaveNet model via `python train.py`, I experience this error. Any workarou…
-
Hi, I am getting the following error:
```
from espnet_onnx.espnet_onnx.export import TTSModelExport
```
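A note that is not part of the report: the doubled `espnet_onnx.espnet_onnx` in the import path usually means the repository checkout, rather than the installed package, is on `sys.path`; with the package installed via pip the class is imported from `espnet_onnx.export`. A minimal sketch, where `export_from_pretrained` and the model tag follow the README examples and are assumptions that may differ by version:

```python
# Sketch: with espnet_onnx installed as a package, the export class lives under
# espnet_onnx.export (no doubled package name).
from espnet_onnx.export import TTSModelExport

m = TTSModelExport()
# Assumed helper and tag, per the project's README examples; adjust to your model.
m.export_from_pretrained("kan-bayashi/ljspeech_vits")
```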
-
When training FastSpeech2 (`fastspeech2_v2`) with phonetic alignments extracted from MFA, I get the error below:
```
/content/TensorflowTTS/tensorflow_tts/trainers/base_trainer.py in run(self)
…
```
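A note not in the original issue: when durations come from MFA rather than the toolkit's own extractor, a frequent cause of trainer crashes is that the summed durations do not match the number of mel frames. A minimal check, where the dump layout and file-name pattern are assumptions based on the usual TensorflowTTS preprocessing output:

```python
# Sketch: verify that each duration file sums to the number of mel frames,
# which MFA-based preprocessing does not always guarantee.
import glob
import os
import numpy as np

for dur_path in glob.glob("dump/train/durations/*.npy"):            # assumed layout
    utt_id = os.path.basename(dur_path).replace("-durations.npy", "")
    mel_path = f"dump/train/norm-feats/{utt_id}-norm-feats.npy"      # assumed layout
    if not os.path.exists(mel_path):
        continue
    durations = np.load(dur_path)
    mel = np.load(mel_path)
    if int(durations.sum()) != mel.shape[0]:
        print(f"{utt_id}: sum(durations)={int(durations.sum())} vs mel frames={mel.shape[0]}")
```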
-
Hi. I am training my dataset with Tacotron 2 on two different machines. On the first one I am using Windows 10 and I have had no problems so far. But on the second one I am using Ubuntu 20.04 and the trai…
-
If I train the HiFi-GAN vocoder using the fine-tuning approach, which uses Tacotron 2 to generate the mels in the first place, can I use the regular Glow-TTS-generated mels with the above-trained HiFi-GAN voco…
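Not part of the question, but the main compatibility constraint is that both acoustic models must produce mels with the same feature settings (sample rate, FFT/hop/window sizes, number of mel bins, fmin/fmax) and the same normalization; fine-tuning on Tacotron 2 outputs only biases the vocoder toward that model's error patterns rather than hard-binding it to that model. A small sketch of the kind of config comparison that helps, with placeholder paths and assumed key names:

```python
# Sketch: compare the mel-extraction settings of two TTS configs before reusing
# a vocoder across models. Key names are assumptions and vary per toolkit.
import yaml

KEYS = ["sample_rate", "n_fft", "hop_length", "win_length", "n_mels", "fmin", "fmax"]

with open("tacotron2_config.yaml") as f:   # placeholder path
    cfg_a = yaml.safe_load(f)
with open("glow_tts_config.yaml") as f:    # placeholder path
    cfg_b = yaml.safe_load(f)

for key in KEYS:
    a, b = cfg_a.get(key), cfg_b.get(key)
    flag = "OK" if a == b else "MISMATCH"
    print(f"{key}: {a} vs {b} -> {flag}")
```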
-
## 🐛 Description
Full log
```
$ tts --model_name "tts_models/zh-CN/baker/tacotron2-DDC-GST" --text "hello" --out_path "test.wav"
> Downloading model to /home/fijipants/.local/share/tts/tts_…
```
-
Win11
GPU: 3060 laptop
Python 3.9.13
+----------------+------------+---------------+------------------+
| Steps with r=2 | Batch Size | Learning Rate | Outputs/Step (r) |
+----------------+-…
-
Hello.
Thanks for the great repo.
I am trying to train a pre-trained FastSpeech2 model (kss).
It seems that durations are needed for training, which can be extracted by training a Tacotron.
I have …
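A sketch that is not from the original post: a common way to get the durations is to run the trained Tacotron with teacher forcing and count, for each mel frame, which input token gets the largest attention weight. Assuming the attention weights for one utterance are already available as a NumPy array:

```python
# Sketch: convert a teacher-forced Tacotron attention matrix into per-token
# durations by counting, for each mel frame, which input token wins the argmax.
import numpy as np

def attention_to_durations(attn: np.ndarray) -> np.ndarray:
    """attn: (n_mel_frames, n_input_tokens) attention weights for one utterance."""
    winners = attn.argmax(axis=1)                           # best token per mel frame
    durations = np.bincount(winners, minlength=attn.shape[1])
    return durations                                        # sums to n_mel_frames

# Example with a fake 5-frame, 3-token alignment:
attn = np.array([[0.9, 0.1, 0.0],
                 [0.7, 0.3, 0.0],
                 [0.1, 0.8, 0.1],
                 [0.0, 0.2, 0.8],
                 [0.0, 0.1, 0.9]])
print(attention_to_durations(attn))  # -> [2 1 2]
```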
-
Hello,
in the default setting, the vocoders are trained on mel-spectra computed from the real speech signals. When they are fed the Tacotron-generated spectra, the quality is a bit lower.
I would l…
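The question is cut off, but if the intent is to fine-tune the vocoder on Tacotron-generated spectra, the usual recipe is to dump ground-truth-aligned (teacher-forced) mels and pair them with the real waveforms for vocoder training. A rough sketch in which `load_dataset` and `teacher_forced_mel` are hypothetical stand-ins for the toolkit-specific loader and teacher-forced forward pass:

```python
# Sketch: dump teacher-forced ("ground-truth-aligned") mels so the vocoder can be
# fine-tuned on the acoustic model's actual outputs instead of real-speech mels.
# `load_dataset` and `teacher_forced_mel` are hypothetical stand-ins for the
# toolkit-specific dataset loader and teacher-forced forward pass.
from pathlib import Path
import numpy as np

out_dir = Path("gta_mels")
out_dir.mkdir(exist_ok=True)

for utt_id, text, real_mel in load_dataset("train"):         # hypothetical loader
    # Teacher forcing keeps the generated mel time-aligned with the real audio,
    # so the (generated mel, real waveform) pairs line up for vocoder training.
    gen_mel = teacher_forced_mel(text, real_mel)              # hypothetical call
    np.save(out_dir / f"{utt_id}.npy", gen_mel)
```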