NVIDIA / flowtron

Flowtron is an auto-regressive flow-based generative network for text to speech synthesis with control over speech variation and style transfer
https://nv-adlr.github.io/Flowtron
Apache License 2.0

Steps to replicate pretrained models on LibriTTS #57

Open ghost opened 4 years ago

ghost commented 4 years ago

First of all, thank you for the amazing paper and for releasing the code.

I have read the instructions and all the issues, but I can't find a single place with the steps that would allow me to faithfully replicate the training of the models you shared (the Flowtron LibriTTS model).

Would it be possible to provide a detailed step-by-step guide to do that? Something that would include exactly:

I am a big fan of easy reproducibility :)

Thanks again.

rafaelvalle commented 4 years ago

ciao dario,

Our paper describes in detail how we trained the LibriTTS model. You will not be able to exactly match our training because the LSH model was trained on LJSpeech and two proprietary datasets. Nonetheless, you should be able to reproduce our results by following the steps in the paper, substituting the LSH dataset with the LJS dataset. Post issues on this repo if you run into problems.
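For future readers, training in this repo is launched roughly as follows (a sketch based on the README; outdir and the dataset paths inside config.json are placeholders):

python train.py -c config.json -p train_config.output_directory=outdir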

ghost commented 4 years ago

Ciao Rafael, Thank you for your answer.

I decided to train on LibriTTS with a warm start from your pretrained LibriTTS model.

1 Flow

As suggested, I started with 1 flow. After more than 1 million steps the training and validation losses look good, together with the attention weights:

[Screenshots: training loss, validation loss, attention weights]

Results

After running inference at different steps, I found that the outputs that "sounded" best were the ones at approximately step 580,000 (which is also where the validation loss is at its minimum). Still, the output wasn't satisfactory, but it was at least intelligible.

2 Flows

I am now training with 2 flows. I started from the checkpoint at step 580,000, set the appropriate include_layers to null, and so far this is how the training is going:

[Screenshots: training loss, validation loss, attention weights 0 and 1]
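For future readers, the warm-start settings for this step amount to a config.json excerpt roughly like the following (a sketch; key names as I understand the repo's config, and the checkpoint path is a placeholder):

"train_config": {
    "warmstart_checkpoint_path": "outdir/model_580000",
    "include_layers": null
},
"model_config": {
    "n_flows": 2
}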

Results

When I run the inference on the early steps of this 2 flow training (step 10,000) the output is still "ok"

[Step 10,000: attention plots for layers 0 and 1 (sid 40, sigma 0.5)]

At step 240,000, even though the losses are lower, the inference results are bad.

[Step 240,000: attention plots for layers 0 and 1 (sid 40, sigma 0.5)]

My questions:

  1. Is it expected that during the training of the 2 flow network, the output will momentarily get worse?
  2. Why are the attention weights so bad at inference time, when they are not bad during training? (See the TensorBoard plots above.)

Thanks a lot again @rafaelvalle

rafaelvalle commented 4 years ago

  1. Yes, because the most recently added flow has not yet learned how to attend.
  2. During training we perform a forward pass, and the first flow step knows how to attend to the inputs. During inference, the last flow step (closest to z) is the first to attend to the inputs, but this step does not yet know how to attend, as your Attention Weights 1 image shows.

Try inference again once your Attention Weights 1 look better.
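To make the ordering concrete, here is a minimal conceptual sketch of the two directions (not the repo's actual code; flow internals are reduced to a transform with text attention):

def forward(mel, text, flows):
    # Training direction: mel -> z. Flow 0, which was trained first,
    # is the first step to attend to the text.
    z = mel
    for flow in flows:                 # flow 0, flow 1, ...
        z = flow.transform(z, text)
    return z

def infer(z, text, flows):
    # Inference direction: z -> mel. The steps are inverted in reverse
    # order, so the most recently added flow (closest to z) attends to
    # the text first, before any other step has refined the signal.
    mel = z
    for flow in reversed(flows):       # flow N-1 first
        mel = flow.inverse(mel, text)
    return mel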

ghost commented 4 years ago

That makes sense, thanks! I will keep you posted and summarize (for future readers) what I have done.

ghost commented 4 years ago

Ok, I have been running the training with 2 flows now for a while.

This is what I see on TensorBoard

[Screenshots: attention weights 1 and 0, validation loss, training loss]

I would say that everything looks great.

When I run inference, however, everything looks (and sounds) bad:

[Inference attention plots for layers 0 and 1 (sid 40, sigma 0.5)]

@rafaelvalle What would you recommend? Things looked and sounded better at the end of training with 1 flow.

Thanks

rafaelvalle commented 4 years ago

Confirm that during inference the hyperparameters in config.json match what was used during training. As a sanity check, generate a few sentences from the training data. Then check whether the issue is sentence- or speaker-dependent.
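For concreteness, a sanity check along those lines could look like this (flags as used elsewhere in this thread; the checkpoint, WaveGlow path, and sentence are placeholders):

python inference.py -c config.json -f outdir/model_240000 -w waveglow.pt \
    -t "A sentence copied verbatim from the training filelist." -i 40 -s 0.5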

ghost commented 4 years ago

config.json is the same

A couple of training sentences with speaker 40 and 887:

Speaker 40: [attention plots for layers 0 and 1 (sigma 0.5)]
Speaker 887: [attention plots for layers 0 and 1 (sigma 0.5)]

Better, but not good. It seems to be sentence-dependent.

rafaelvalle commented 4 years ago

If you're not already, make sure to add punctuation to the phrases.

ghost commented 4 years ago

I did add punctuation. Should I just train longer?

rafaelvalle commented 4 years ago

Did you try a lower value of sigma?

ghost commented 4 years ago

I was already running it with sigma=0.5

rafaelvalle commented 4 years ago

Try something even more conservative, 0.25. Is this model trained with speaker embeddings? Also, can you share the phrases you've been evaluating?
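For context, sigma scales the standard deviation of the Gaussian latent that the flow inverts into a mel-spectrogram, so a lower sigma trades expressive variation for stability. A minimal sketch of the sampling step (shapes and names are assumptions, not the repo's exact code):

import torch

sigma = 0.25  # lower sigma: less variation, usually more stable attention
n_mel_channels, n_frames = 80, 400
# z ~ N(0, sigma^2 I); the trained flow inverts z into a mel-spectrogram
z = torch.randn(1, n_mel_channels, n_frames) * sigma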

rafaelvalle commented 4 years ago

What happens if you set n_frames to be 6 times the number of tokens?

ghost commented 4 years ago

Yes, the model is trained with speaker embeddings.

Here are some examples:

I set sigma as low as 0.25 as you suggested.

"I was good enough not to contradict this startling assertion." -i 887 -s 0.25 
"Then one begins to appraise." -i 1116 -s 0.25
"Now let us return to your particular world." -i 40 -s 0.25

And in the inference.py script I added the computation for n_frames:

# encode the text, then budget roughly 6 mel frames per input token
text = trainset.get_text(text).cuda()
n_frames = len(text) * 6

The results are still bad.

rafaelvalle commented 4 years ago

Try these modifications to the phrases:

"I was good enough to contradict this startling assertion."
"Now let us return your particular world."
ghost commented 4 years ago

Speaker 40: "Now let us return your particular world."

[Attention plots for layers 0 and 1 (sid 40, sigma 0.25)]

Speaker 887: "I was good enough to contradict this startling assertion."

[Attention plots for layers 0 and 1 (sid 887, sigma 0.25)]

rafaelvalle commented 4 years ago

That's very surprising. Give us some time to look into it.

ghost commented 4 years ago

Thanks a lot! I really appreciate your help. Please let me know if I can be more involved in the investigation

ghost commented 4 years ago

One thing: there are differences in the output when running inference on different checkpoints. None of them is good enough, but there are significant fluctuations, of course.

rafaelvalle commented 4 years ago

Are the speaker ids you're sharing the LibriTTS ids? The model should have about 123 speakers.

ghost commented 4 years ago

Yes, from the LibriTTS ids: list

rafaelvalle commented 4 years ago

I synthesized the 3 phrases with our LibriTTS-100 model trained with speaker embeddings, using sigma=0.75 and n_frames=1000.
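Presumably that corresponds to an invocation along these lines (a sketch; the -n flag for n_frames and the model paths are assumptions):

python inference.py -c config.json -f flowtron_libritts.pt -w waveglow.pt \
    -t "I was good enough not to contradict this startling assertion." \
    -i 887 -s 0.75 -n 1000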

Your attention weights during training look really good and your validation loss is similar to what we reached. Can you share your model weights?

phrases.zip

ghost commented 4 years ago

Those phrases sound like what I'd like to hear.

I uploaded the checkpoint I used here

There is one small difference in the dataset: a few of speaker 40's sentences were removed.

This is the config file

This is the training files list

ghost commented 4 years ago

@rafaelvalle did you manage to run the inference using the weights I shared? Thanks

rafaelvalle commented 4 years ago

Yes, using your model I get results similar to yours. I will take a look at your model once the paper deadlines are over.