NVIDIA / flowtron

Flowtron is an auto-regressive flow-based generative network for text to speech synthesis with control over speech variation and style transfer
https://nv-adlr.github.io/Flowtron
Apache License 2.0

Request for clarification on some of the readme scripts. #141

Closed Jcwscience closed 2 years ago

Jcwscience commented 2 years ago

First off, sorry if any of this sounds like a rant; I have been fighting with this project for many weeks and am tired.

Are the model links correct? None of the indicated WaveGlow models produce any sound, and the provided scripts throw errors that I can only trace back to the pretrained models.

Does the script check whether a speaker ID is valid by iterating through the training file list? I keep seeing this "speaker ID" term tossed around, but I cannot figure it out. Not only can I not find a list of valid IDs for any of the pretrained models, but half of the time the scripts say that the number of speakers is incorrect. Incorrect with reference to what? And how can I find the correct number?

Every model I try to train throws dictionary key errors for iterations, model, state_dict, epochs, just about everything (though not all at the same time). I have tried all of the fixes I can find across the issue reports from this project, along with FastSpeech, FastPitch, Tacotron 2, and WaveGlow, and every fix I try creates two or more new issues to track down.

Assuming I have only intermediate knowledge in this field, a file list with the corresponding WAV files in the correct format and sample rate, otherwise working hardware that more than meets the requirements, and a stubborn determination to keep fighting this until my eyes bleed, where should I start?

If anyone would be willing to help me out, I need to know which models are known to work with the scripts, which mode to use (fine-tuning or warmstart) with a small dataset of maybe 15 minutes to a couple of hours if necessary, and which model (LJS vs. LibriTTS) would work best. If LibriTTS works better, where do I find which speaker IDs are valid without brute-forcing it over a few days?
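(For anyone with the same questions: the sketch below is how I would check this by hand rather than anything from the README. The checkpoint and filelist paths are only examples, and it assumes the checkpoints store their weights under a 'state_dict' or 'model' key and that the filelists use the pipe-separated audio_path|text|speaker_id layout.)

```python
import torch

# Sketch only: paths are examples, and the key names ('state_dict'/'model',
# 'speaker_embedding') are assumptions about how the checkpoints are laid out.
ckpt = torch.load("models/flowtron_libritts.pt", map_location="cpu")
print("top-level keys:", list(ckpt.keys()))  # shows whether 'model', 'state_dict', 'iteration', ... are present

state = ckpt["state_dict"] if "state_dict" in ckpt else ckpt["model"].state_dict()
for name, weight in state.items():
    if "speaker_embedding" in name:
        # first dimension = number of speakers the checkpoint was trained with,
        # i.e. the value the config's n_speakers has to match
        print(name, tuple(weight.shape))

# Valid speaker IDs are whatever shows up in the third pipe-separated field of the filelist
with open("filelists/libritts_train_filelist.txt", encoding="utf-8") as f:
    speaker_ids = {line.strip().split("|")[2] for line in f if line.strip()}
print("speaker ids in filelist:", sorted(speaker_ids))
```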

Thanks in advance if anyone can help me out here! I would certainly appreciate it!

Bahm9919 commented 2 years ago

To solve the problem with the sound:

Delete `.half()` in these two lines: https://github.com/NVIDIA/flowtron/blob/701780103910522282336d9e014e59f345070145/inference.py#L82 https://github.com/NVIDIA/flowtron/blob/701780103910522282336d9e014e59f345070145/inference.py#L47

They need to read `waveglow.cuda()` and `audio = waveglow.infer(mels, sigma=0.8).float()`.
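In context, the patched inference path ends up looking roughly like this (a sketch rather than the exact inference.py code; the wrapper function is just for illustration):

```python
import torch

def synthesize(waveglow, mels, sigma=0.8):
    # Keep WaveGlow in full (fp32) precision instead of calling .half(),
    # which is what was producing the silent output.
    waveglow.cuda()  # was: waveglow.cuda().half()
    with torch.no_grad():
        audio = waveglow.infer(mels.cuda(), sigma=sigma).float()
    return audio.cpu()
```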

This worked for me.

Jcwscience commented 2 years ago

@Bahm9919 It worked!! Now we’re in business. Thanks!

Bahm9919 commented 2 years ago

To answer the other questions, I need to know more about your data. Is it data for a single speaker or not? What language?

Jcwscience commented 2 years ago

@Bahm9919 I'll try to edit my first comment to add more info when I get a free minute today. The data is a single speaker, me, and it is in English. Since I'm doing the recording myself, I can record for longer or adjust the data or text in any way that's necessary.

Jcwscience commented 2 years ago

@Bahm9919 The only reason I thought I might need the LibriTTS version was to take advantage of the style or emotion transfer properties, so I have a bit more control over the output audio.

Bahm9919 commented 2 years ago

In that case (single-speaker English data), warmstart training from the LJS pretrained model will be suitable.
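(For what it's worth, the difference from plain fine-tuning is that a warmstart loads only the pretrained weights and drops dataset-dependent layers such as the speaker embedding. Below is a minimal sketch of the idea, not the repo's actual warmstart code, and the checkpoint layout is an assumption.)

```python
import torch

def warmstart_from(checkpoint_path, model, skip=("speaker_embedding",)):
    # Load only the pretrained weights (no optimizer state or iteration count)
    # and drop layers whose shape depends on the dataset, e.g. the speaker
    # embedding when the new data has a different number of speakers.
    ckpt = torch.load(checkpoint_path, map_location="cpu")
    state = ckpt["state_dict"] if "state_dict" in ckpt else ckpt["model"].state_dict()
    state = {k: v for k, v in state.items() if not any(s in k for s in skip)}
    model.load_state_dict(state, strict=False)
    return model
```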

Bahm9919 commented 2 years ago

Try to get results with the LJS model first, and then maybe try LibriTTS.

Jcwscience commented 2 years ago

@Bahm9919 You got it! Sounds like a plan.