huggingface / dataspeech

work on other languages #16

Open · taalua opened this issue 7 months ago

taalua commented 7 months ago

Hi,

For adapting the current model to other languages, is it better to fine-tune the existing trained model and prompt tokenizer "parler-tts/parler_tts_mini_v0.1", or is it better to train from scratch with a custom tokenizer? Any suggestions for a multilingual tokenizer if using espeak-ng? Thank you for your insights.

ylacombe commented 6 months ago

Hey @taalua, it depends on the languages you want to fine-tune on! If the Flan-T5 tokenizer covers your language (say, Spanish or French), you can fine-tune the existing model; otherwise, you'll probably need a custom tokenizer, or one suited to multilinguality (mT5, for example), and to train your model from scratch!
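A quick, unofficial way to check that coverage is to tokenize a sample sentence and look for `<unk>` tokens or heavy fragmentation. The sketch below is my own illustration, not part of dataspeech; it assumes the `transformers` library, uses the `google/flan-t5-base` tokenizer as a stand-in for the Parler-TTS prompt tokenizer, and the sample sentences are arbitrary:

```python
# Rough coverage heuristic (illustrative only): a language the vocabulary
# fits poorly tends to produce <unk> tokens or many tokens per word.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")

samples = {
    "es": "El modelo convierte una descripción en voz.",
    "de": "Das Modell wandelt Text in Sprache um.",
}

for lang, text in samples.items():
    ids = tokenizer(text).input_ids
    tokens = tokenizer.convert_ids_to_tokens(ids)
    fertility = len(tokens) / len(text.split())  # tokens per whitespace word
    print(lang, "has <unk>:", tokenizer.unk_token_id in ids,
          "tokens/word:", round(fertility, 1))
```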

thorstenMueller commented 1 month ago

Hi @ylacombe, congrats on your impressive work 👏.

I created a German "Thorsten-Voice" dataset on Hugging Face to be used for Parler-TTS training (https://huggingface.co/datasets/Thorsten-Voice/TV-44kHz-Full).

Right now I'm taking my first steps with "dataspeech" and am wondering whether I can simply adjust this code, or whether I have to switch to another phonemizer like "phonemizer", to support my work on a pure German single-speaker voice dataset.
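For what it's worth, calling the phonemizer package with its espeak-ng backend on German text could look roughly like the sketch below; this is my own minimal example, not code from dataspeech, and the sentence and options are just illustrative:

```python
# Minimal sketch (illustrative, not from dataspeech): phonemize German text
# with the "phonemizer" package, which wraps the espeak-ng backend.
from phonemizer import phonemize

text = "Hallo, ich bin Thorsten."
phonemes = phonemize(
    text,
    language="de",          # espeak-ng ships a German voice out of the box
    backend="espeak",
    strip=True,
    preserve_punctuation=True,
)
print(phonemes)  # IPA transcription of the input sentence
```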