rmangino opened this issue 7 months ago
I'm also curious about how to maintain the consistency of the generated voice.
Parler-TTS generates a similar but different voice when given the same description but different transcript text.
For this to be useful we need to be able to select the voice. For example, if I have a long video that I want to dub with this, being unable to generate the exact same voice for all of the text makes it useless, I'm afraid.
Thanks for the feedback all! Cross-posting a response from @ylacombe: https://huggingface.co/parler-tts/parler_tts_mini_v0.1/discussions/7#661fda86994005b654b417a4
In short, you can fine-tune Parler-TTS on a single speaker with as little as 30h of data. In doing so, you can fix the voice to the single speaker, while still maintaining the text description control.
As mentioned, we'll explore more voice control (e.g. through voice prompting) for the v1 release.
Forgive the amateur question, but is Parler-TTS deterministic at all? Does each generation have an associated seed? If so, could we potentially invoke it with that same seed to gain more consistency between runs?
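For concreteness, here's roughly what I have in mind, adapted from the README usage (the checkpoint name and seed value are just examples). Note that fixing the seed should reproduce the same output for identical inputs, but it's unclear whether it pins the voice once the transcript changes, which is the issue being discussed here:

```python
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer, set_seed

repo = "parler-tts/parler_tts_mini_v0.1"
model = ParlerTTSForConditionalGeneration.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

description = "A female speaker with a calm, clear voice and very little background noise."
prompt = "Hey, how are you doing today?"

input_ids = tokenizer(description, return_tensors="pt").input_ids
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids

set_seed(42)  # fix the RNG state so repeated runs with identical inputs match
generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio = generation.cpu().numpy().squeeze()
sf.write("out.wav", audio, model.config.sampling_rate)
```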
https://github.com/huggingface/parler-tts/pull/110
Does this help? I have a notebook demonstrating how to try and maintain voice consistency.
We use TTS in an eLearning environment where we generate hundreds of videos per year. All of these videos must use the same exact voice for consistency.
To use Parler-TTS I'd need to be able to generate a voice (based upon a description), save it, then use it across multiple TTS sessions. We currently use Google's TTS API, which allows me to select from a list of voices so that all of my TTS audio sounds exactly like the same speaker.
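Until there's a way to save and reload a voice directly, the closest workaround I can see is treating a fixed description plus a fixed seed as the "saved voice" and regenerating from that pair in every session. A minimal sketch (the description, seed, line list, and file names are all placeholders), with the caveat raised above that the timbre may still drift between transcripts:

```python
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer, set_seed

repo = "parler-tts/parler_tts_mini_v0.1"
model = ParlerTTSForConditionalGeneration.from_pretrained(repo)
tokenizer = AutoTokenizer.from_pretrained(repo)

# The "saved voice": a fixed description and seed stored with the project.
description = "A female speaker with a calm, clear voice and very little background noise."
seed = 42
desc_ids = tokenizer(description, return_tensors="pt").input_ids

lines = [
    "Welcome to module one.",
    "In this video we will cover the basics.",
]

for i, text in enumerate(lines):
    set_seed(seed)  # reset the RNG before every line so each starts from the same state
    prompt_ids = tokenizer(text, return_tensors="pt").input_ids
    generation = model.generate(input_ids=desc_ids, prompt_input_ids=prompt_ids)
    sf.write(f"line_{i:03d}.wav", generation.cpu().numpy().squeeze(), model.config.sampling_rate)
```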