-
![step-37000-align](https://user-images.githubusercontent.com/298918/36618688-03b30a1c-18e4-11e8-8164-ff8cce59d68f.png)
This is my alignment after 37,000 steps. Should the results be better by now?
…
-
Thanks for the code.
Can you please give info on the data used for training the pre-trained models, both for AutoVC and speaker embedding? If you trained on a subset of a larger database, please le…
-
Can it run in real time?
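For context, "real time" here is usually judged by the real-time factor: wall-clock synthesis time divided by the duration of the generated audio. A minimal sketch for measuring it, where `synthesize` and `sample_rate` are hypothetical stand-ins for whatever inference function and sampling rate the repo actually exposes:
```python
import time

def real_time_factor(synthesize, text, sample_rate=22050):
    # `synthesize` is assumed to return a 1-D array of audio samples.
    start = time.time()
    wav = synthesize(text)
    elapsed = time.time() - start
    audio_seconds = len(wav) / sample_rate
    return elapsed / audio_seconds  # < 1.0 means faster than real time
```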
-
I have noticed that on the [tacotron2-work-in-progress](https://github.com/keithito/tacotron/blob/tacotron2-work-in-progress/hparams.py#L13) branch the `min_mel_freq` in `hparams.py` is set to 125 Hz th…
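For what it's worth, `min_mel_freq` typically ends up as the `fmin` of the mel filterbank, so energy below 125 Hz is discarded before the network ever sees it. A minimal sketch with librosa, where the sample rate, FFT size, and `max_mel_freq` are assumed values rather than the branch's actual hparams:
```python
import librosa

sample_rate = 22050   # assumed
n_fft = 1024          # assumed
num_mels = 80         # assumed
min_mel_freq = 125    # the value the issue refers to
max_mel_freq = 7600   # assumed

# Filters below fmin carry zero weight, so content under 125 Hz never
# reaches the resulting mel-spectrogram.
mel_basis = librosa.filters.mel(
    sr=sample_rate, n_fft=n_fft, n_mels=num_mels,
    fmin=min_mel_freq, fmax=max_mel_freq)
print(mel_basis.shape)  # (num_mels, 1 + n_fft // 2)
```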
-
With the same WaveNet model and the same utterance (p225_001.wav), I found that the quality of the waveform generated from the mel-spectrogram in the provided metadata.pkl is much better than the one gener…
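A quality gap like this often comes down to the mel extraction (hop size, fmin/fmax, normalization) not matching what the provided metadata.pkl was built with. A rough way to compare the two, where the `"mel"` key and the path of the locally extracted feature are assumptions rather than the file's actual schema:
```python
import pickle
import numpy as np

with open("metadata.pkl", "rb") as f:
    meta = pickle.load(f)

provided_mel = np.asarray(meta["mel"])    # assumed key name
my_mel = np.load("my_p225_001_mel.npy")   # hypothetical path to your own extraction

print("provided:", provided_mel.shape, provided_mel.min(), provided_mel.max())
print("mine:    ", my_mel.shape, my_mel.min(), my_mel.max())
# Differing shapes or value ranges usually point at a mismatched hop size,
# fmin/fmax, or normalization step.
```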
-
Hi @adhamel,
Sorry to randomly open an issue in this repository, but I noticed the issue you opened in r9y9's WaveNet repository and saw that you had trained his implementation on your own dataset. Is…
-
[merlin](https://github.com/CSTR-Edinburgh/merlin) is a DNN-based TTS system. The model is trained on [labeled speech data](http://www.festvox.org/cmu_arctic/).
merlin also uses [festival](http://www.cstr.e…
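For readers unfamiliar with merlin, its acoustic model is essentially a feed-forward network mapping per-frame linguistic label features to per-frame acoustic features. A minimal sketch of that idea in PyTorch; the dimensions are made up, and merlin's real pipeline (separate duration and acoustic models, festival-derived full-context labels) is considerably more involved:
```python
import torch
import torch.nn as nn

label_dim = 425      # assumed size of a full-context label feature vector
acoustic_dim = 187   # assumed size (e.g. MGC + lf0 + bap with deltas)

# Frame-level regression: linguistic label features -> acoustic features.
acoustic_model = nn.Sequential(
    nn.Linear(label_dim, 1024), nn.Tanh(),
    nn.Linear(1024, 1024), nn.Tanh(),
    nn.Linear(1024, acoustic_dim),
)

frames = torch.randn(100, label_dim)   # 100 frames of label features
acoustic = acoustic_model(frames)      # predicted acoustic frames
print(acoustic.shape)                  # torch.Size([100, 187])
```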
-
## 2018/05/01
- Updated to be compatible with PyTorch v0.4
- Updated to be able to use melspectrogram as an auxiliary feature

Due to the above updates, some parts have changed (see below):
```
# ----…
-
I just recovered from a hard disk failure and am now using the HEAD of the repo (I was previously on the May 15 [commit](https://github.com/Rayhane-mamah/Tacotron-2/commit/c1a4109430ed22047535d3ee6a0712a9920f33a7)),
but the…