Approximetal opened this issue 4 years ago
Hi @Approximetal,
Quick question: are those samples from out-of-domain speakers or were those speakers in the training data?
They are in the training dataset.
Hmmm, how large is the dataset (in hours)? Also how much data is there for each speaker? I'm not sure how much impact dataset size has but it may be a good first thing to investigate.
Otherwise, a good option might be to try finetuning the pretrained model on your dataset?
I train on four datasets together: VCTK-Corpus, VCC2020, a Chinese dataset of about 24h, and a multilingual audiobook dataset of about 150h. By the way, how do I fine-tune the pre-trained model? I tried fine-tuning on VCTK and VCC2020 from the 1000k pre-trained model, decreasing the batch size to 2 and the learning rate to 5e-5, but it doesn't work.
Sorry about the delay @Approximetal. That's a lot of data so I don't think finetuning is necessary. Since you have a big dataset my guess is that the large variation in speakers, recording conditions, and background noise causes the output distribution over next audio samples to be flatter. Sampling from the distribution could then introduce some noise.
I'm not sure how to get rid of the noise completely. One option is to try a larger model. If you have time to experiment you could try increasing `fc_channels` or `conditioning_channels` here. Otherwise, if you only intend on generating audio from a smaller subset of speakers at test time you could train for a few epochs on only those speakers.
If you try any of these ideas or get better results some other way please let me know. Hope that helps.
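For concreteness, here is a hypothetical sketch of the kind of change meant above, written as a plain Python dict. The parameter names are the ones discussed in this thread; the values are only an example (roughly doubling assumed defaults), not the repo's actual config:

```python
# Hypothetical hparams sketch, not the repo's config file: only the two
# widths discussed above are changed, everything else stays at its default.
hparams = {
    "conditioning_channels": 256,  # wider conditioning network (assumed default is smaller)
    "fc_channels": 512,            # wider final fully-connected layer (assumed default is smaller)
    # rnn_channels, embedding_dim, etc. left unchanged
}
```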
Thank you for your advice, I will try it later and update my results. Actually, I tried increasing the number of layers and the dimensions of the network, but the loss didn't decrease; I haven't found the reason.
Also, sometimes a sharp noise occurs at the head of the synthesized audio. When I comment out the padding `mel = torch.nn.functional.pad(mel, [0,0,4,0,0,0], "constant")`, the sharp noise disappears. Do you know why that happens?
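For reference, a minimal sketch of what that pad call does, assuming `mel` is shaped `(batch, frames, n_mels)` (the actual shape in the code may differ): it prepends 4 all-zero frames to the time axis, which lines up with the noise disappearing when the pad is removed.

```python
import torch
import torch.nn.functional as F

# Sketch only: assumes mel is (batch, frames, n_mels); shapes in the repo may differ.
mel = torch.randn(1, 100, 80)

# F.pad reads the pad list from the last dim backwards:
# (0, 0) -> n_mels unchanged, (4, 0) -> 4 frames prepended, (0, 0) -> batch unchanged.
padded = F.pad(mel, [0, 0, 4, 0, 0, 0], "constant")
print(padded.shape)  # torch.Size([1, 104, 80]); the first 4 frames are all zeros
```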
Thanks for the update @Approximetal,
My guess is that adding more layers might result in vanishing gradients. Just changing the width of the layers might work though. @wade3han also reported a similar noise issue at the beginning of the audio in #12. I think this is because the initial hidden state of the RNN layers may be off. Adding a few silent frames to the spectrogram might give the RNNs a chance to warm up. Otherwise, as I mention here, it might be worth training without slicing out the middle segment before passing it to the autoregressive layer. That way the network can learn to generate audio from the initial RNN state.
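A rough sketch of the warm-up idea (not the repo's code): prepend a few frames at the spectrogram's silence level instead of hard zeros, so the RNN state can settle before the real content starts. Using the per-bin minimum as the silence level here is an assumption.

```python
import torch

def prepend_silence(mel: torch.Tensor, n_frames: int = 4) -> torch.Tensor:
    # mel: (batch, frames, n_mels), assumed log-scaled.
    silence = mel.min(dim=1, keepdim=True).values   # per-bin minimum, (batch, 1, n_mels)
    warmup = silence.expand(-1, n_frames, -1)       # repeat it for a few warm-up frames
    return torch.cat([warmup, mel], dim=1)          # (batch, frames + n_frames, n_mels)
```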
Sorry, I made a mistake: I meant that when I comment out the padding in the inference/generation step, the noise disappears. BTW, do I only need to adjust `fc_channels` and `conditioning_channels`? What about the others? Is it necessary to expand the other channels to match the increase in `fc_channels`?
I tried two sets of parameters:
"conditioning_channels": 512,
"embedding_dim": 512,
"rnn_channels": 896,
"fc_channels": 512,
and
"conditioning_channels": 256,
"embedding_dim": 256,
"rnn_channels": 896,
"fc_channels": 512,
But the loss didn't decrease at all... @bshall
Hi @Approximetal,
That's weird. I'll look into those parameters and get back to you shortly. The changes you made are correct so I'm not sure what the problem is.
Hi, I used your method to train on my own dataset for 1000k iterations. It sounds stable and has only a little background noise, but the loss stays around 2.6 and the noise didn't disappear after another 1000k steps. I have tried reducing the batch size to 2 and the learning rate to 5e-5, but it doesn't work. How can I deal with it? samples.zip