huggingface / parler-tts

Inference and training library for high-quality TTS models.
Apache License 2.0
2.6k stars 265 forks

Need the ability to save/re-use a generated voice #14

Open rmangino opened 1 month ago

rmangino commented 1 month ago

We use TTS in an eLearning environment where we generate hundreds of videos per year. All of these videos must use the same exact voice for consistency.

To use Parler-TTS I'd need to be able to generate a voice (based upon a description), save it, then use it across multiple TTS sessions. We currently use Google's TTS API, which allows me to select from a list of voices so that all of my TTS audio sounds exactly like the same speaker.
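Since the text description is currently the only "voice handle" Parler-TTS exposes, the closest thing to saving a voice today is persisting the exact description (and generation settings) and reloading them in every session. This is a minimal, hypothetical sketch (the file name, helper names, and seed field are my own, not part of the library); note that, as pointed out below in this thread, an identical description does not guarantee an identical voice, because generation is stochastic.

```python
import json
from pathlib import Path

# Hypothetical "voice profile" file: stores the description and settings
# so every TTS session feeds identical inputs to the model. This keeps the
# inputs consistent, but does NOT pin the exact voice in Parler-TTS v0.1.
VOICE_FILE = Path("voice_profile.json")

def save_voice_profile(description: str, seed: int = 42) -> None:
    """Persist the description and a seed for reuse across sessions."""
    VOICE_FILE.write_text(json.dumps({"description": description, "seed": seed}))

def load_voice_profile() -> dict:
    """Reload the saved description and settings."""
    return json.loads(VOICE_FILE.read_text())

save_voice_profile("A calm female voice with a slight British accent, close-mic recording.")
profile = load_voice_profile()
```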

janewu77 commented 1 month ago

I'm also curious about how to maintain the consistency of the generated voice.

shuaijiang commented 1 month ago

Parler-TTS generates a similar but different voice when given the same description with a different transcript text.

juangea commented 1 month ago

For this to be useful we need to be able to select the voice. For example, if I have a long video that I want to dub with this, then without being able to generate the exact same voice for all the text, this is useless I'm afraid.

sanchit-gandhi commented 1 month ago

Thanks for the feedback all! Cross-posting a response from @ylacombe: https://huggingface.co/parler-tts/parler_tts_mini_v0.1/discussions/7#661fda86994005b654b417a4

In short, you can fine-tune Parler-TTS on a single speaker with as little as 30h of data. In doing so, you can fix the voice to the single speaker, while still maintaining the text description control.

As mentioned, we'll explore more voice control (e.g. through voice prompting) for the v1 release.

Jefferderp commented 4 weeks ago

Forgive the amateur question, but is Parler-TTS deterministic at all? Does each iteration have a seed associated? If so, could we potentially invoke with that same seed to gain more consistency between runs?
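I can't confirm whether Parler-TTS generation is fully deterministic end-to-end (GPU kernels can introduce nondeterminism), but the general principle behind the question can be sketched: if you seed the random number generator before sampling, the sampled sequence is reproducible. For a PyTorch model like Parler-TTS, the analogous call would be `torch.manual_seed(seed)` before `model.generate(...)`. The sketch below illustrates the idea with Python's standard `random` module; the function name and token range are illustrative only.

```python
import random

def sample_ids(seed: int, n: int = 8) -> list:
    # Seeding the RNG makes the sampled sequence reproducible:
    # the same seed always yields the same draws.
    rng = random.Random(seed)
    return [rng.randrange(1024) for _ in range(n)]

# Same seed -> identical samples; a different seed -> (almost surely) different.
run1 = sample_ids(42)
run2 = sample_ids(42)
run3 = sample_ids(7)
```

The same reasoning is why re-running a seeded generation can give more consistency between runs, though it would not by itself let you transfer the same voice to a *different* transcript.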