-
Hi,
I am really enjoying your API in conjunction with Alexa and other Smart Home devices and regularly use it for notifications and small announcements as well. While I am using Amazon Polly Vicki …
-
Hi,
I have trained a FS2 model and fine-tuned my MBMelgan model. Here is a sample of the speech produced in Python before exporting to TFLite: [Normal FS2 and MBMelgan](https://drive.google.com/fil…
-
Branch: Master ([540d811dd5](https://github.com/mozilla/TTS/tree/540d811dd52b5598a7cd21cbbcf197b0bfbeab62))
Hi,
trying to train a multi-speaker model using the current master branch with phonem…
-
C++ inference is now supported (thanks @ZDisket for the dedicated support). It will be improved over time to support more models and stay in sync with the main repo :D. Check it out :D
Code: https://gith…
-
Wouldn't model performance be better if we increased the number of model parameters?
Have you ever run an experiment like this?
-
I found that the symbol set in the original code includes phonemes, and when I use phonemes as input for inference with the published checkpoint, it works well. But when I train the model myself and use phonemes as inp…
-
#### What is your question?
I'm new to fairseq and am trying to train a simple LSTM-based model for a grapheme-to-phoneme conversion task, using a command similar to the one [here](https://github.c…
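For orientation, a minimal fairseq recipe for a grapheme-to-phoneme task might look like the sketch below. All file paths, language suffixes, and hyperparameters are illustrative assumptions, not the poster's actual setup:

```shell
# Binarize parallel grapheme/phoneme files (data/train.graphemes,
# data/train.phonemes, etc. are hypothetical names).
fairseq-preprocess \
  --source-lang graphemes --target-lang phonemes \
  --trainpref data/train --validpref data/valid \
  --destdir data-bin/g2p

# Train a small LSTM encoder-decoder; sizes and LR are placeholders.
fairseq-train data-bin/g2p \
  --arch lstm \
  --encoder-embed-dim 256 --decoder-embed-dim 256 \
  --optimizer adam --lr 1e-3 --max-tokens 4096 \
  --save-dir checkpoints/g2p
```

For G2P the input and output are character/phoneme sequences rather than words, so the training files would contain space-separated symbols (e.g. `c a t` on the source side, `K AE T` on the target side).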
-
I am using tf-nightly==2.5.0-dev20201029 and tested the TFLite model.
When I use the code below to test the performance of the FastSpeech TFLite model, the first input can be converted to aud…
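A common cause of "only the first input works" with sequence models in TFLite is that the interpreter keeps the tensor shapes from the previous call. One workaround is to resize the input tensor and re-allocate buffers before every invocation. The sketch below assumes a single int32 text-id input; real FastSpeech exports usually take several inputs (speaker id, speed/f0/energy ratios), which would be resized the same way:

```python
import numpy as np

def tts_invoke(interpreter, input_ids):
    """Run one TFLite inference on a variable-length id sequence."""
    ids = np.asarray([input_ids], dtype=np.int32)  # shape (1, T)
    in_detail = interpreter.get_input_details()[0]
    # Resize for THIS sequence length, then re-allocate buffers,
    # so a second call with a different length does not reuse stale shapes.
    interpreter.resize_tensor_input(in_detail["index"], ids.shape)
    interpreter.allocate_tensors()
    interpreter.set_tensor(in_detail["index"], ids)
    interpreter.invoke()
    out_detail = interpreter.get_output_details()[0]
    return interpreter.get_tensor(out_detail["index"])

# Usage (model path is a placeholder):
# import tensorflow as tf
# interp = tf.lite.Interpreter(model_path="fastspeech.tflite")
# mel_1 = tts_invoke(interp, first_ids)
# mel_2 = tts_invoke(interp, second_ids)  # new length now works too
```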
-
If you use phonemizer with dates, the numbers are converted as they are read aloud, but the pronunciation differs from how they are spoken in the dataset (LJSpeech).
For instance if you say: "I came here in 1948 but I didn't li…
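One way around this is to expand years into their spoken form before running phonemizer, so that "1948" becomes "nineteen forty-eight" as in the LJSpeech transcripts. The helper below is a minimal sketch I wrote for illustration, not part of phonemizer; it only handles four-digit years, and round years like 2000 would need extra rules:

```python
import re

# Number words for 0-19 and the tens.
ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven",
        "eight", "nine", "ten", "eleven", "twelve", "thirteen", "fourteen",
        "fifteen", "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def two_digits(n):
    """Spell out a number from 0 to 99."""
    if n < 20:
        return ONES[n]
    tens, ones = divmod(n, 10)
    return TENS[tens] + ("-" + ONES[ones] if ones else "")

def year_to_words(year):
    """Read a four-digit year as two pairs, e.g. 1948 -> nineteen forty-eight."""
    hi, lo = divmod(year, 100)
    if lo == 0:
        return two_digits(hi) + " hundred"   # 1900 -> nineteen hundred
    if lo < 10:
        return two_digits(hi) + " oh " + ONES[lo]  # 1905 -> nineteen oh five
    return two_digits(hi) + " " + two_digits(lo)

def normalize_years(text):
    """Replace four-digit years (1000-2099) with their spoken form."""
    return re.sub(r"\b(1[0-9]{3}|20[0-9]{2})\b",
                  lambda m: year_to_words(int(m.group())), text)
```

Running `normalize_years("I came here in 1948")` would then feed "I came here in nineteen forty-eight" to phonemizer, matching the spoken audio.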
-
@bshall Thank you for this implementation. Can I use this repository as a universal vocoder? I want to train tacotron with vq-vae features. Will this work?