-
Hey @r9y9
I found your repository https://github.com/r9y9/wavenet_vocoder after doing some searching online for a starting point for getting something like Respeecher has (https://www.respeecher.c…
-
- News
- ICLR 2022 reviews de-anonymized! https://openreview.net/group?id=ICLR.cc/2022/Conference
- Good luck to everyone at KDD 2022 and COLT 2022!
- [The arrival of the era of AI-driven productivity gains, and how to prepare for it](https://bdtechtalks.com/2022/01/31/ai-productiv…
-
Hi,
In the paper, the WaveRNN network is used as the neural vocoder. Is there a specific reason it was replaced by HiFi-GAN in this repository?
Thanks!
-
Dear Team,
I am trying to explore the feasibility of running ESPnet2 TTS on modest computing devices such as PCs, notebooks, and the like.
Currently I am testing it on my notebook:
LENOVO B590
Intel(R) Celer…
-
- News
- Arxiv
-
I am training a universal WaveRNN with >900 speakers. I aim to release this model to the community, which will hopefully address the vocoder dependence of TTS solutions in general.
I am using https://gi…
-
Please, I want to know whether there is a pre-trained model in the Spanish or German language. I was creating an implementation, but I need these two languages to make it more accessible in my country. In o…
-
Hi, I trained a multi-speaker TTS model with around 100 speakers using the template of "espnet/egs/Libritts" with a non-English language. Earlier, I had successfully trained and tested a non-English single-speaker model. During m…
-
For example, a WaveFlow vocoder trained on the LJSpeech English dataset will generate a very faithful wav when given the mel file of an English wav. But if you feed it the mel spectrogram of another language, such as Japanese or Chinese, it may be unable to synthesize normal audio, and may even produce only noise. Therefore a vocoder needs to be trained for each specific language. Is that right?
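One practical point behind this question: the vocoder consumes mel spectrograms, so besides the training-data distribution, the mel extraction configuration (sample rate, FFT size, number of mel bands, frequency range) must match between the acoustic model and the vocoder, or synthesis degrades regardless of language. The sketch below builds a standard HTK-style triangular mel filterbank in NumPy to make those parameters concrete; the defaults (22050 Hz, 1024-point FFT, 80 bands, 80–7600 Hz) are illustrative assumptions, not the settings of any particular recipe.

```python
import numpy as np

def hz_to_mel(f):
    # HTK-style mel scale
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr=22050, n_fft=1024, n_mels=80, fmin=80.0, fmax=7600.0):
    """Triangular filters spaced evenly on the mel scale.

    These parameter values are illustrative; a vocoder only works with
    mels extracted using the exact configuration it was trained on.
    """
    mel_pts = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2)
    hz_pts = mel_to_hz(mel_pts)
    # Map filter edge frequencies to FFT bin indices.
    bins = np.floor((n_fft + 1) * hz_pts / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, center):
            fb[i, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fb[i, k] = (right - k) / max(right - center, 1)
    return fb

fb = mel_filterbank()
print(fb.shape)  # (80, 513): n_mels x (n_fft // 2 + 1)
```

If the acoustic model emits 80-band mels in the 80–7600 Hz range but the vocoder was trained on a different band count or frequency range, the input simply falls outside the distribution the vocoder learned, which produces artifacts similar to the cross-language failure described above.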
-
Hi,
I am seeing warnings, introduced by the recent changes (https://github.com/kan-bayashi/ParallelWaveGAN/pull/285), when loading a pre-trained model. Relevant code:
https://github.com/kan-bayashi/Paral…