erogol closed this issue 1 year ago.
great project! Excited to see this growing!
I'm learning the code/API and performing experiments. I hope to contribute soon.
I'm also wondering if I can donate (money) to Coqui?
Wow! Thanks! Humbling.
We were setting up GitHub sponsors, but the tax implications were onerous.
We're currently exploring Patreon. So stay tuned!
@erogol Thanks for sharing the plans!
Do you have any thoughts on (or need help with) simplifying the dependencies a bit? I'm thinking that if TTS is used as a lib installed over pip, it might be nice to remove visualisation dependencies only used in notebooks, remove test/dev dependencies, and move e.g. tensorflow into extras to reduce the footprint. Personally, I would love to use this as a dependency rather than maintaining my own fork.
@agrinh Why do you need to keep your own fork exactly? It'd be better to expand the conversation on gitter if you like.
Wow, thanks for the super fast reply. Sure, we can move the discussion to gitter.
Please add DC-TTS to the list of models.
A DC-TTS implementation with MIT-licensed code is available here, from the paper "Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention". @erogol
What were you thinking about the "TensorFlow run-time for training models"? Like giving the user the option of using TensorFlow or PyTorch? I wouldn't mind taking a stab at the TensorFlow part.
@will-rice the plan is to mirror what we have in torch to TF as much as possible. It'd be great if you initiated the work.
Are you guys planning to develop some expressive TTS architectures? I'm currently studying this topic and planning to implement some of them based on Coqui: some that just control the latent space using GST (Kwon et al., 2020) or RE (Sorin et al., 2020), and others that actually change the architecture by adding VAEs, normalizing flows, and gradient reversal.
@lucashueda Capacitron VAE: https://github.com/coqui-ai/TTS/pull/510
Oh nice, I hope to see Capacitron integrated soon. Then maybe in the future I'll be able to contribute some other expressive architectures.
@erogol Looking forward to new end-to-end models being implemented, specifically Efficient-TTS! If the paper is accurate, it should blow most two-stage configurations out of the water: it reports a higher MOS than Tacotron2 + HiFi-GAN while also appearing to be significantly faster than GlowTTS + the fastest vocoder. I have not seen a single repo replicating the EFTS-Wav architecture described in the paper released 10 months ago; it would be amazing to see it in Coqui first!
@BillyBobQuebec I don't think I will implement these models anytime soon. But as they stand, contributions are welcome
@BillyBobQuebec but you can try VITS which is close to what you're describing :)
Agreed. I am actually trying VITS at the moment, but unfortunately I have some issues training with the Coqui implementation. I've posted an issue about the bug today and hope I can get it resolved.
Hi there! Thanks for your great work! I'm looking forward to training YourTTS on other languages. Will training and fine-tuning code of YourTTS be published soon? I would be very grateful if you could tell me an approximate time~ Have a nice day :-D
Hello, thanks for the great work! I'm a fan of Coqui TTS. I'm porting some parts of the project to Rust. The VC in YourTTS has been successfully implemented, and for this purpose an example of saving/loading a pretrained Vits model has been added to the repo. I'm writing this on the Milestones PR because I think my work can be helpful to others :)
@kerryeon great work!! Thanks for sharing!
Any plans to port the coqui-ai engine to Android? TTS on Android is very robotic (espeak, RHVoice, Festival Lite).
No immediate plans on that
Thumbs up for planning ONNX support. Hope it gets prioritized more!
@Darth-Carrotpie what is your use-case of ONNX? (Just want to get some feedback)
Personally, it sounds like a good way to develop native Windows TTS applications without needing a Python runtime and/or big dependencies like PyTorch.
I tried exporting the VITS model to ONNX before but didn't succeed. There are also other obstacles besides executing the model, like phonemization. ^^
Currently I am using pythonnet to embed the required Python functions directly in my C# code. For Python I use the embedded version to make the app distributable.
@erogol I am trying to run models in Unity. Its environment is C#, .NET Standard 2.1. Having a universal model format means that in the long run I can run models in an OS-agnostic manner. Of course, things like tokenization and phonemization are additional hurdles, but with open-source examples it's quite doable. For models needing tokenizers I've been using BlingFire successfully, so I reckon there are similar phonemizer helpers/libraries for languages besides Python, including C#. Edit: things that embed Python into C#, like pythonnet, are convenient but quite slow. In my case, where I have multiple models loaded and running at the same time (i.e. ~10), the needless interpreter overhead can become a critical bottleneck. Plus it might add unforeseen debugging issues.
@Darth-Carrotpie run in unity means in the code or integrate it to Unity editor?
Also better to move this to a separate post under the Discussions
Created a topic on ONNX at Discussions: https://github.com/coqui-ai/TTS/discussions/1479
Is there a flutter package for using this TTS library? Might be an easy way to get this for use in real-world applications.
I am also very new to development but will like to contribute to this project. Can I work under someone?
@desh-woes there is no flutter package, unfortunately.
Can you DM me on Gitter or Element (our chat rooms) if you're willing to work on a particular thing?
How can I train a model using word embeddings as input?
@omkarade no support for that yet.
I want to train a custom Your TTS model on my data set. Can you please share me detailed process.
You can read the relevant documentation here: https://tts.readthedocs.io/en/latest/finetuning.html Also, this is the roadmap thread; please don't ask for support here, open a new discussion/issue instead.
Looking forward to the SSML implementation!
@erogol is the NaturalSpeech paper something you'd think about implementing I could take a crack at it.
@Kthulu120 sure thing. Feel free to shoot a PR. We are always here to help.
Will there be a C API to this library like your STT library?
Not in the roadmap currently
@JediMaster25 you think the Roadmap is the right place for this convo?
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also look our discussion channels.
It's good to see progress on a proper TTS project. I'm running Arch with no CUDA, and I'm going to see if I can convince it to use my CPU instead! It would be really cool if this could use AVX-512 on AMD chips.
Hi, thanks for the delightful code! I want to use this version of TTS on a Raspberry Pi 4, but I don't think it supports real-time processing. Are there TF utilities provided, as in Mozilla TTS, to convert trained models to TF-Lite? Could a quantization strategy work here for real-time processing? I'd appreciate some guidance in this regard.
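On CPU-bound hardware like a Pi 4, one option on the PyTorch side (independent of any TF-Lite path) is dynamic quantization, which converts `nn.Linear` weights to int8. This is only a sketch on a toy model, not a claim about the real TTS models' speedup:

```python
import torch
import torch.nn as nn

# Toy stand-in for a linear-heavy decoder. Dynamic quantization
# stores Linear weights as int8 and quantizes activations on the
# fly, which mainly helps CPU inference.
model = nn.Sequential(
    nn.Linear(80, 256),
    nn.ReLU(),
    nn.Linear(256, 80),
).eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 80)
with torch.no_grad():
    y = quantized(x)
print(y.shape)
```

Whether this reaches real time on a Pi 4 depends on the model architecture; convolution-heavy vocoders benefit less from dynamic quantization than linear/RNN layers do.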
Thanks Neda
Thank you for your great work for TTS.
Is there any progress on "Let the user pass a custom text cleaner function"? If possible, I want to pass my own Korean cleaners.
You can currently do it by creating your own tokenizer or overloading the class.
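To illustrate, a custom cleaner is just a `str -> str` callable. The function below and the way it would be hooked into the tokenizer are my own sketch, not Coqui's documented API; check the current signature in `TTS.tts.utils.text.tokenizer` before relying on it:

```python
import re
import unicodedata

def korean_cleaner(text: str) -> str:
    """Hypothetical Korean cleaner: NFC-normalize Hangul, drop
    characters outside Hangul/jamo/alphanumerics/basic punctuation,
    and collapse whitespace."""
    text = unicodedata.normalize("NFC", text)
    text = re.sub(r"[^\uAC00-\uD7A3\u3131-\u318E0-9a-zA-Z\s.,!?]", "", text)
    text = re.sub(r"\s+", " ", text).strip()
    return text

# This callable would then be handed to the tokenizer, e.g.
# TTSTokenizer(text_cleaner=korean_cleaner) -- verify against the
# current Coqui TTS source.
print(korean_cleaner("안녕하세요!!   세계~"))  # -> 안녕하세요!! 세계
```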
Marvelous project. Any ways to donate to core contributors? I would prefer to use paypal.
@MaxIakovliev you can use https://coqui.ai/ :)
This roadmap issue is quite outdated. I'll keep it open to keep the references to some of the issues and models we like to tackle but won't be updating until one day officially becomes 48 hours.
Any update regarding SSML implementation?
These are the main dev plans for :frog: TTS.
If you want to contribute to :frog: TTS and don't know where to start you can pick one here and start with our Contribution Guideline. We're also always here to help.
Feel free to pick one or suggest a new one.
Contributions are always welcome :muscle: .
v0.1.0 Milestones
- Synthesizer interface on CLI or Server.

v0.2.0 Milestones

v0.3.0 Milestones

v0.4.0 Milestones
- TTS.tts models.

v0.5.0 Milestones

v0.6.0 Milestones

v0.7.0 Milestones

v0.8.0 Milestones

Milestones along the way

New TTS models