-
Hi,
We are trying to train a multi-speaker model starting from the LibriTTS data, using the latest FastPitch commit. We selected the 50 speakers with the most utterances in the dataset, an…
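In case it helps to make the setup concrete, here is a minimal sketch (the paths and the `path|text|speaker_id` filelist format are assumptions, not taken from the repo) of how one might pick the 50 speakers with the most utterances from a LibriTTS-style directory layout:

```python
from collections import Counter
from pathlib import Path

# Hypothetical path to an extracted LibriTTS subset.
LIBRITTS_ROOT = Path("LibriTTS/train-clean-360")

# LibriTTS stores audio as <root>/<speaker_id>/<chapter_id>/<utterance>.wav,
# so the speaker id is three path components from the end.
counts = Counter(wav.parts[-3] for wav in LIBRITTS_ROOT.rglob("*.wav"))

# Keep the 50 speakers with the most utterances.
top_speakers = {spk for spk, _ in counts.most_common(50)}

# Assuming a "path|text|speaker_id" filelist, filtering could look like this:
with open("filelist_all.txt") as src, open("filelist_top50.txt", "w") as dst:
    dst.writelines(line for line in src
                   if line.rstrip().split("|")[-1] in top_speakers)
```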
-
Hi all, first of all, thanks @Rayhane-mamah for fixing the bugs in the WaveNet vocoder and making it fully work now :) I've spent several days looking into its implementation, and there's a part that really makes me s…
-
**Submitting author:** @souzatharsis (Thársis T. P. Souza)
**Repository:** https://github.com/souzatharsis/podcastfy
**Branch with paper.md** (empty if default branch):
**Version:** v0.2.17
**Editor:…
-
How do you generate the .npy alignments from the audio files?
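I'm not sure which repository this question refers to, but in the teacher-forced setups I have seen, the `.npy` files hold per-token durations extracted from a trained attention model and written out with NumPy. A minimal sketch of that idea (the attention matrix below is a random placeholder, not real model output):

```python
import numpy as np

def attention_to_durations(attn: np.ndarray) -> np.ndarray:
    """Collapse a (mel_frames, text_tokens) attention matrix into integer
    per-token durations by assigning each frame to its most-attended token."""
    token_per_frame = attn.argmax(axis=1)
    return np.bincount(token_per_frame, minlength=attn.shape[1]).astype(np.int64)

# Placeholder: in practice `attn` comes from a trained teacher model
# (e.g. Tacotron 2 alignments) for a single utterance.
attn = np.random.rand(400, 60)
np.save("utt_0001_alignment.npy", attention_to_durations(attn))
```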
-
## Contents
@Hiroshiba
According to #498, it might be time to take English into consideration.
I suggest we start with the [LJSpeech](https://keithito.com/LJ-Speech-Dataset/) corpus, si…
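For reference, LJSpeech is a single-speaker English corpus of 13,100 clips with a pipe-separated `metadata.csv`. A minimal loading sketch (the extraction path is an assumption) could look like:

```python
import csv
from pathlib import Path

LJS_ROOT = Path("LJSpeech-1.1")  # assumed extraction directory

# metadata.csv rows are "<utterance_id>|<raw text>|<normalized text>"
pairs = []
with open(LJS_ROOT / "metadata.csv", encoding="utf-8") as f:
    for utt_id, _raw, normalized in csv.reader(f, delimiter="|",
                                               quoting=csv.QUOTE_NONE):
        pairs.append((LJS_ROOT / "wavs" / f"{utt_id}.wav", normalized))

print(len(pairs), "utterances")  # expected: 13100
```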
-
Determine how to tackle the following tasks. It might be helpful to involve SMEs in this early phase to vet the improvisation segments that we're considering. The SMEs will be involved in assessing …
-
As rafaelvalle mentioned in https://github.com/NVIDIA/tacotron2/issues/336#issuecomment-649724985, dropout causes the Tacotron model to "say the same phrase in multiple ways". In theory, this is a …
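For anyone who wants to experiment with this, here is a sketch of the mechanism (a simplified prenet for illustration, not the exact NVIDIA code): the prenet keeps dropout active even at inference time, which is where the run-to-run variation comes from, and exposing that as a flag gives a deterministic mode to compare against:

```python
import torch.nn.functional as F
from torch import nn

class Prenet(nn.Module):
    """Simplified Tacotron-style prenet, for illustration only."""

    def __init__(self, in_dim=80, sizes=(256, 256), p_dropout=0.5,
                 dropout_at_inference=True):
        super().__init__()
        dims = [in_dim, *sizes]
        self.layers = nn.ModuleList(nn.Linear(a, b)
                                    for a, b in zip(dims[:-1], dims[1:]))
        self.p = p_dropout
        self.dropout_at_inference = dropout_at_inference

    def forward(self, x):
        for linear in self.layers:
            x = F.relu(linear(x))
            # training=True keeps dropout on even under model.eval(),
            # which is what makes the model say the same phrase in
            # multiple ways; set dropout_at_inference=False to get
            # deterministic synthesis instead.
            x = F.dropout(x, p=self.p,
                          training=self.training or self.dropout_at_inference)
        return x
```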
-
## 🐛 Description
I got a UserWarning when trying to train Tacotron2 with LJSpeech. The warning is shown below:
```
/media/DATA-2/TTS/coqui/TTS/TTS/tts/models/tacotron2.py:341: UserWarning: __flo…
```
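If the truncated message is PyTorch's `__floordiv__` deprecation warning (an assumption on my part, since the text is cut off), it usually comes from integer division on tensors, and the replacement PyTorch recommends is `torch.div(..., rounding_mode="floor")`, e.g.:

```python
import torch

lengths = torch.tensor([7, 9, 12])

# Old pattern that triggers the warning (Tensor.__floordiv__):
#   frames = lengths // 2
# Replacement recommended by PyTorch since 1.8:
frames = torch.div(lengths, 2, rounding_mode="floor")
print(frames)  # tensor([3, 4, 6])
```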
-
**🚀 Feature Description**
There is no Italian TTS yet.
**Solution**
I trained some models using a [male dataset](https://huggingface.co/datasets/z-uo/male-LJSpeech-italian) and a [female dataset](h…
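In case it is useful for testing, here is a minimal synthesis sketch with the Coqui `TTS.api` wrapper; the checkpoint and config paths are placeholders, and the exact argument names can vary between TTS versions:

```python
from TTS.api import TTS

# Placeholder paths to an Italian checkpoint trained on the
# z-uo LJSpeech-italian data.
tts = TTS(model_path="best_model.pth", config_path="config.json")
tts.tts_to_file(
    text="Ciao, questo è un esempio di sintesi vocale in italiano.",
    file_path="esempio_it.wav",
)
```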