-
Hello, I have a question.
What is the difference between this project (Tacotron-Wavenet-Korean-TTS) and Tacotron-Wavenet-Vocoder?
Also, in this project's hparams.py, wavenet_batch_size is 2, while in Tacotron-Wavenet-Vocoder's hparams.py the wavenet_batch…
-
Hi everyone, I'm a new member of the group. Glad to have read the detailed instructions in the README and the previous discussions. I completed the voice training after 3 steps:
- Step 1: train with …
-
Hi guys,
I'm trying to train a speech encoder whose output is similar to Tacotron2's encoder output, using teacher-student training. So after it is trained, I will have a speech encoder whose input i…
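For anyone unfamiliar with the setup being described: in teacher-student training the student network is fit to reproduce the frozen teacher's outputs. A minimal sketch of that objective (plain Python lists stand in for real model tensors; the function name and toy values are mine, not from any repo):

```python
def mse_loss(student_out, teacher_out):
    """Mean squared error between two equal-length feature vectors.

    In the described setup, teacher_out would come from the frozen
    Tacotron2 text encoder and student_out from the speech encoder
    being trained; minimizing this loss drives the student to mimic
    the teacher.
    """
    assert len(student_out) == len(teacher_out)
    n = len(student_out)
    return sum((s - t) ** 2 for s, t in zip(student_out, teacher_out)) / n

# Toy example: teacher (text-encoder) features vs. student features.
teacher = [0.5, -1.0, 2.0]
student = [0.0, -1.0, 1.0]
loss = mse_loss(student, teacher)  # (0.25 + 0.0 + 1.0) / 3
```

In a real training loop this loss would be backpropagated through the student only, with the teacher's weights kept frozen.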
-
Since I had been following the Real-Time-Voice-Cloning project (https://github.com/CorentinJ/Real-Time-Voice-Cloning), it was a real pity that the Mocking Bird project didn't use Tacotron2. So I took the liberty of porting Tacotron2 into this system myself; it's rough, but it trains and runs inference successfully.
![image](https://user-images.gi…
-
Hello, I have trained my tacotron2 model successfully on around 2000+ audio files. However, when running inference through Waveglow, the audio output is not clear. There is too much noise. Where a…
-
I'm a noob to tacotron2 and I'm running on a CPU, so maybe that's why I'm getting these errors. Can someone help me solve this? I installed different versions of tensorflow, looked online for hours, and c…
-
Hey @Rongjiehuang,
Thanks a lot for open-sourcing the checkpoint for the FastDiff vocoder for LJSpeech!
I played around with the code a bit and I'm only getting quite noisy generations when dec…
-
First, thank you for your excellent work.
I have a question: how can I use phonemes to train models? Not in other projects, only in this tacotron2.
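In general, phoneme-based training means replacing the grapheme (character) input with phoneme tokens, typically looked up from a pronouncing dictionary such as CMUdict, before the symbols are encoded. A rough illustration of that preprocessing step (the tiny hand-written lexicon below is only for the sketch; a real setup would load the full dictionary):

```python
# Toy two-word lexicon in ARPAbet notation (CMUdict style); real training
# would load the complete CMU Pronouncing Dictionary instead.
LEXICON = {
    "hello": ["HH", "AH0", "L", "OW1"],
    "world": ["W", "ER1", "L", "D"],
}

def to_phonemes(text):
    """Map each word to its phoneme list, flattening into one sequence.

    Words missing from the lexicon fall back to their letters, a common
    simple strategy when no grapheme-to-phoneme model is available.
    """
    phones = []
    for word in text.lower().split():
        phones.extend(LEXICON.get(word, list(word)))
    return phones

print(to_phonemes("hello world"))
# ['HH', 'AH0', 'L', 'OW1', 'W', 'ER1', 'L', 'D']
```

The resulting phoneme sequence is then fed through the usual symbol-to-id encoding in place of raw characters.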
-
Thank you, very promising result.
Dataset: arctic only
Parameters: everything is default
Training time: 14 hours
![untitled](https://user-images.githubusercontent.com/3985740/27171201-a9b7d55e-5…
-
Given that this repo only takes care of TTS conversion with Tacotron2, I thought it might be convenient to turn it into a distributable package (preferably a conda one) so that whenever I install it, …
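A minimal sketch of what the packaging metadata could look like. Everything here is hypothetical: the distribution name, version, dependency list, and entry point are placeholders, not the repo's actual layout; a conda recipe could then wrap this same metadata.

```python
# setup.py -- hypothetical packaging sketch, not the repo's real config.
from setuptools import find_packages, setup

setup(
    name="tacotron2-tts",      # placeholder distribution name
    version="0.1.0",           # placeholder version
    packages=find_packages(),  # picks up the repo's Python packages
    install_requires=[
        "numpy",               # assumed runtime dependency, adjust as needed
    ],
    entry_points={
        # hypothetical CLI entry point for synthesis
        "console_scripts": ["tacotron2-synthesize=tacotron2.cli:main"],
    },
)
```

With a file like this in place, `pip install .` would make the package importable from any environment, and a conda-build recipe could reuse the same metadata.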