theodorblackbird / lina-speech

Official implementation of the TTS model Lina-Speech

Model Scaling, finetuning recipe #4

Open rishikksh20 opened 7 months ago

rishikksh20 commented 7 months ago

Hi @theodorblackbird Currently the model sounds good, but I think that if you scale it up it will get better at picking up prosody and timbre from the prompt and will sound much more natural. One suggestion I can give is to scale the model to around 300M parameters and train it on the latest 10k hours of Hugging Face's TTS data: https://huggingface.co/datasets/parler-tts/mls-eng-10k-tags_tagged_10k_generated (see also https://github.com/huggingface/dataspeech). I have tested this on VoiceCraft (https://github.com/jasonppy/VoiceCraft), which is also based on a delayed RVQ pattern: I trained a 330M VoiceCraft on 1k hours of multilingual data and it sounds amazing and very natural. It has some noise and the voice is not that crisp, but that is due to the lower sample rate of 16000 Hz.
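For readers unfamiliar with the "delayed RVQ pattern" mentioned above: it refers to shifting each residual codebook right by its index before autoregressive decoding, the scheme popularized by MusicGen. A minimal sketch of the idea, where `pad_id` and the exact padding are illustrative assumptions rather than the actual VoiceCraft or Lina-Speech implementation:

```python
import torch

def apply_delay_pattern(codes: torch.Tensor, pad_id: int) -> torch.Tensor:
    # codes: (num_codebooks, seq_len) integer codec tokens.
    # Codebook k is shifted right by k frames, so at each decoding step
    # the coarse codebooks are predicted before the finer ones.
    num_q, seq_len = codes.shape
    out = torch.full((num_q, seq_len + num_q - 1), pad_id, dtype=codes.dtype)
    for k in range(num_q):
        out[k, k : k + seq_len] = codes[k]
    return out
```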

theodorblackbird commented 7 months ago

Thank you! I have some questions:

I'm considering scaling up to 130M and beyond, as soon as I have compute.

rishikksh20 commented 7 months ago

Hi @theodorblackbird I fine-tuned VoiceCraft from the pre-trained model provided by the author. For the multilingual data I mixed LibriTTS-R with in-house Hindi data to scale up to 1k hours, then fine-tuned the 330M model for 100k steps and it gave good results. A sketch of such a data mix is below.
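For reference, a mix like that could be assembled with the HF `datasets` library. The dataset ids and split names below are placeholders, not the exact corpora used:

```python
from datasets import load_dataset, concatenate_datasets, Audio

# Placeholder dataset ids/splits -- adapt to your own corpora.
libritts = load_dataset("mythicinfinity/libritts_r", "clean", split="train.clean.360")
hindi = load_dataset("my-org/in-house-hindi", split="train")  # hypothetical in-house data

# Bring both to a common sample rate; concatenate_datasets requires
# that the two datasets share the same column schema.
libritts = libritts.cast_column("audio", Audio(sampling_rate=16000))
hindi = hindi.cast_column("audio", Audio(sampling_rate=16000))

mixed = concatenate_datasets([libritts, hindi]).shuffle(seed=42)
```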

rishikksh20 commented 7 months ago

@theodorblackbird Scaling up to 130M sounds good. I could even train a model that small myself if I had the data module format and the training recipe.

rishikksh20 commented 6 months ago

How's the 130M model performing? I saw you updated the README with the 130M model, but I'm not able to grab the checkpoint.

theodorblackbird commented 6 months ago

Hey @rishikksh20! My bad, I uploaded the models but the links expired. Anyway, everything is on Hugging Face now: https://huggingface.co/lina-speech/all-models/tree/main. There is now a 130M GLA model trained on LibriLight-medium, plus one fine-tuned on LibriTTS-R, trained for 300k steps only (clear underfitting). For the training recipes, I'm translating my internal code to the HF datasets framework and will share it as soon as possible; I'll ping you when it's ready.
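For anyone who hit the expired links, the whole model repo can be fetched with `huggingface_hub`. A minimal sketch; `snapshot_download` pulls every file in the repo unless you filter with `allow_patterns`:

```python
from huggingface_hub import snapshot_download

# Downloads all checkpoints from the repo and returns the local cache path.
local_dir = snapshot_download(repo_id="lina-speech/all-models")
print(local_dir)
```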

rishikksh20 commented 5 months ago

@theodorblackbird Have you checked https://play.cartesia.ai/? Their TTS is also based on a state-space model and their samples are amazing. Maybe they trained a large model on a very large audio dataset.

theodorblackbird commented 5 months ago

@rishikksh20 I did, and it sounds really good. I wonder if they use only an SSM or some kind of cross-attention. Right now I'm still working on a 350M scale-up on the dataset you showed me: https://huggingface.co/datasets/parler-tts/mls-eng-10k-tags_tagged_10k_generated
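A minimal sketch of loading that dataset in streaming mode, so the ~10k hours don't have to be downloaded up front (the split name is an assumption):

```python
from datasets import load_dataset

# Stream the tagged MLS-English subset instead of materializing it on disk.
ds = load_dataset(
    "parler-tts/mls-eng-10k-tags_tagged_10k_generated",
    split="train",  # assumed split name
    streaming=True,
)
sample = next(iter(ds))
print(sample.keys())
```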

rishikksh20 commented 5 months ago

ok, Mamba has recently been drawing a lot of attention in the speech domain

ScottishFold007 commented 5 months ago

@theodorblackbird Hi, can I train the TTS model on my own now with this repo? I'd like to run a multilingual version to try it out and see what the sound quality is like at a 44100 Hz sample rate. I'll share the results of the experiment with you then.
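Note that training at 44.1 kHz also requires an audio codec that operates at that rate; resampling the raw audio itself is straightforward, e.g. with torchaudio (file names here are placeholders):

```python
import torchaudio

# Load a clip and resample it to 44.1 kHz if needed.
wav, sr = torchaudio.load("sample.wav")
if sr != 44100:
    wav = torchaudio.functional.resample(wav, orig_freq=sr, new_freq=44100)
torchaudio.save("sample_44k.wav", wav, 44100)
```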

ScottishFold007 commented 5 months ago

> @rishikksh20 I did, and it sounds really good. I wonder if they use only an SSM or some kind of cross-attention. Right now I'm still working on a 350M scale-up on the dataset you showed me: https://huggingface.co/datasets/parler-tts/mls-eng-10k-tags_tagged_10k_generated

Could you please explain how "audio_token", "align_token", and "text_token" are obtained in the LinaDataModule?
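The data module wasn't documented at the time of this exchange. Purely as a hypothetical illustration of what those three fields typically hold in codec-based TTS pipelines (not the actual `LinaDataModule`):

```python
from dataclasses import dataclass
import torch

@dataclass
class LinaItem:
    # Hypothetical field layout -- NOT the actual LinaDataModule.
    audio_token: torch.Tensor  # (num_codebooks, frames) discrete codec codes, e.g. from a neural codec
    text_token: torch.Tensor   # (text_len,) phoneme/character ids from a text tokenizer
    align_token: torch.Tensor  # (frames,) index of the text token active at each audio frame

# Dummy tensors just to show the expected shapes.
item = LinaItem(
    audio_token=torch.randint(0, 1024, (4, 250)),
    text_token=torch.randint(0, 100, (32,)),
    align_token=torch.randint(0, 32, (250,)),
)
```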