Closed — erogol closed this issue 3 years ago
Any ELI5 tutorial/doc for creating a dataset for your own language/dialect?
Not sure if it is ELI5, but there is this link https://github.com/coqui-ai/TTS/wiki/What-makes-a-good-TTS-dataset
Also, @thorstenMueller has created a TTS dataset from the gecko so he might have valuable comments if you have specific questions.
Feel free to ask specific questions. I'd be happy to share my experiences on recording a new dataset here.
Hi @erogol, thank you for the amazing work, from Mozilla TTS to coqui-ai. Although Mozilla seemed perfect to me as it had a wider community reach, I just hope this grows even wider and faster than Mozilla. I am planning to share my models for Spanish and Italian (Tacotron 2 at 600k steps + WaveRNN). The audio quality seems good, but I need to train a bit more and also ask the dataset providers whether it would be okay to make the models public. Fingers crossed.
Let me know if I can contribute in any way; I have Google Colab Pro resources lying around free.
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 460.67       Driver Version: 460.32.03    CUDA Version: 11.2     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|                               |                      |               MIG M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  Off  | 00000000:00:04.0 Off |                    0 |
| N/A   35C    P0    24W / 300W |      0MiB / 16160MiB |      0%      Default |
|                               |                      |                  N/A |
+-------------------------------+----------------------+----------------------+

+-----------------------------------------------------------------------------+
| Processes:                                                                  |
|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |
|        ID   ID                                                   Usage      |
|=============================================================================|
|  No running processes found                                                 |
+-----------------------------------------------------------------------------+
@Sadam1195 thx for the amazing work :rocket::rocket:.
I really hope we can include your models, of course with the right attribution going to you.
Just waiting for your signal.
For general contribution, this is a nice place to start https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md
If you just like to train models, let me know; we can also find new datasets to attack.
> I really hope we can include your models, of course with the right attribution going to you.

I hope they allow me, otherwise I would see it as wasting my time and effort.

> Just waiting for your signal.

I will let you know when I get the confirmation.

> If you just like to train models, let me know we can also find new datasets to attack.

Training models on Colab can be a bit annoying, as sessions often get disconnected even with all the tricks in the book.
Nonetheless, I would love to train models on new datasets (if you have any), especially in languages in which TTS models haven't been made public yet.
Hello,
I've just started to train a public-domain Japanese dataset, https://github.com/kaiidams/Kokoro-Speech-Dataset, with Tacotron 2 from the latest master of https://github.com/mozilla/TTS on Google Colab Free. After 19K steps, I can hear what the speaker says, although it sounds metallic.
To proceed, I'd like to know which branch and repo you recommend I use. https://github.com/erogol/TTS_recipes seems a bit old.
> To proceed, I'd like to know which branch and repo do you recommend for me to use? https://github.com/erogol/TTS_recipes seems a bit old.
Please use this https://github.com/coqui-ai/TTS instead of https://github.com/mozilla/TTS and use the latest main branch. @kaiidams
@Sadam1195 @erogol
I trained Tacotron 2 for 130K steps with this code, https://github.com/kaiidams/TTS/tree/kaiidams/kokoro, which was forked from the latest main. The results are here: https://drive.google.com/drive/folders/1-1_HB-ogmvD-qYaHm8D5Xp1pWq9HKhB_?usp=sharing The included sample.wav was generated with vocoder_models/universal/libri-tts/wavegrad.
The input of the model is romanized Japanese text. It requires some dependencies like MeCab to convert ordinary texts into that form. The dataset is in the public domain and the reader knows about the dataset. I think I can provide Python code for the text conversion.
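For readers unfamiliar with the conversion step: a real pipeline would first use MeCab to obtain kana readings for kanji, then map kana to romaji. The kana-to-romaji half can be sketched with a toy table (the table below covers only a handful of kana and ignores digraphs and long-vowel rules; it is an illustration, not kaiidams' actual converter):

```python
# Toy katakana-to-romaji sketch. A real pipeline would run MeCab first to get
# kana readings for kanji, and use a full romanization table with digraphs.
KANA_TO_ROMAJI = {
    "コ": "ko", "ン": "n", "ニ": "ni", "チ": "chi", "ハ": "ha",
    "ア": "a", "リ": "ri", "ガ": "ga", "ト": "to", "ウ": "u",
}

def romanize(kana):
    """Convert a katakana string to romaji, character by character."""
    return "".join(KANA_TO_ROMAJI.get(ch, ch) for ch in kana)

print(romanize("コンニチハ"))  # konnichiha
```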
@kaiidams if you can send a PR for the text conversion, something similar to the Chinese API we have, together with the model, that would be a great contribution.
> Feel free to ask specific questions. I'd be happy to share my experiences on recording a new dataset here.
- Find/create a text corpus to record (one sentence = one recording)
- Replace numbers with text
- Create a CSV file from the corpus
- Check out Mimic-Recording-Studio from Mycroft as a recording environment (https://github.com/MycroftAI/mimic-recording-studio)
Start recording:
- Keep a constant speaking speed across recordings
- Pronounce all characters clearly
- Speak in a neutral voice
- Use good microphone equipment
- Find a recording place without random noise
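The noise-floor point in the list above can be sanity-checked numerically: record a few seconds of silence in your recording spot and measure its RMS level, which should sit well below your speech level. A stdlib-only sketch (the file name, demo tone, and any threshold you pick are illustrative):

```python
import math
import struct
import wave

def rms_dbfs(path):
    """Return the RMS level of a 16-bit mono WAV file in dBFS."""
    with wave.open(path, "rb") as wav:
        assert wav.getsampwidth() == 2 and wav.getnchannels() == 1
        frames = wav.readframes(wav.getnframes())
    samples = struct.unpack(f"<{len(frames) // 2}h", frames)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(max(rms, 1e-9) / 32768.0)

# Demo: write one second of a quiet 440 Hz tone and measure its level.
with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)
    wav.setframerate(22050)
    tone = [int(1000 * math.sin(2 * math.pi * 440 * t / 22050))
            for t in range(22050)]
    wav.writeframes(struct.pack(f"<{len(tone)}h", *tone))

print(round(rms_dbfs("tone.wav"), 1))  # about -33 dBFS for this tone
```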
Any reason why this and this aren't in the README? I had to look up training to reach here.
Hi @zubairahmed-ai. Here's a talk I made on how to record a voice dataset, if that's helpful for you.
@thorstenMueller Perfect timing, thank you
Oh, just realized this talk happened during the recent Google I/O and I somehow didn't catch it while watching other videos :)
@thorstenMueller Thanks so much for the great video explaining your process in detail with some tips. I'll make sure I follow that. Do you plan to try other models besides Tacotron 2, like Align-TTS?
You're welcome @zubairahmed-ai :-). I'm currently finishing some recording work for my emotional dataset and training a Fullband-MelGAN vocoder, so I've no time left to look at other models like Align-TTS. But feel free to train a "Thorsten" model with Align-TTS ;-).
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also check out our discussion channels.
Asking people to share their models could also be added to CONTRIBUTING.md, since it is asking for contributions. I'd be up for doing that, if no one has taken it up yet?
Yeah, good point. Feel free to take it on.
I would like to contribute my own model, but I'm stuck in the middle. I have created a dataset (LJSpeech format) of my own voice. To train my model I need a config.json file, so can anyone provide me a template config.json for the LJSpeech dataset format?
Thanks in advance.
@ManoBharathi93 you can start from the LJSpeech recipes in the recipes folder and change the config fields for your dataset specs. You can find more info here: https://tts.readthedocs.io/en/latest/
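For anyone landing here with the same question, a heavily stripped-down config might look roughly like the sketch below. The field names are assumptions based on the LJSpeech recipe configs; treat all paths and values as placeholders and copy the full template from an actual recipe rather than from this fragment:

```json
{
    "run_name": "my-voice-tacotron2",
    "audio": {
        "sample_rate": 22050
    },
    "batch_size": 32,
    "datasets": [
        {
            "name": "ljspeech",
            "path": "/path/to/my_dataset/",
            "meta_file_train": "metadata.csv",
            "meta_file_val": null
        }
    ]
}
```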
@erogol thanks a lot sir
Hello folks, how can I add a drop-down menu to list the available (downloaded) models in the web UI? When I change the server.py file, the web interface doesn't change. Please mention which file I need to change for it to affect the web UI.
I'd like to share a Tacotron2-DCA model and a Univnet model I trained on the Nancy corpus.
Here is a sample:
The link to the models: https://drive.google.com/drive/folders/1bMNOjjYxcCkgwkcYAlsPR3qM4hZQzAOR?usp=sharing
Thanks again for the great work!
@godspirit00 the quality is awesome.
Please consider sharing your pre-trained models in any language (if the licences allow that).
We can include them in our model catalogue for public use by attributing your name (website, company etc.).
That would enable more people to experiment together and coordinate, instead of individual efforts to achieve similar goals.
That is also a chance to make your work more visible.
You can share in two ways.
Models are served under the `.models.json` file, and any model is available under the `tts` CLI or server endpoints. More details... (previously mozilla/TTS#395)
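To illustrate what "served under `.models.json`" means: the catalogue is a nested JSON object keyed by model type, language, dataset, and model name. The sketch below is an assumed shape with placeholder values only; check the actual `.models.json` in the repo for the exact schema and required keys:

```json
{
    "tts_models": {
        "en": {
            "my_dataset": {
                "tacotron2": {
                    "description": "Example model entry (placeholder)",
                    "github_rls_url": "https://example.com/path/to/model.zip",
                    "author": "your name here",
                    "license": "your license here"
                }
            }
        }
    }
}
```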