huawei-noah / Speech-Backbones

This is the main repository of open-sourced speech technology by Huawei Noah's Ark Lab.

About end2end implementation #12

Open quangnh-2761 opened 2 years ago

quangnh-2761 commented 2 years ago

Hi, thank you for sharing your excellent work. I want to ask about your end-to-end TTS model. In the paper, you state that only the decoder is changed so that it can generate waveforms (using the WaveGrad architecture). So the $\mu$ vector is no longer the mean statistics of the mel-spectrogram, just hidden features. I wonder what probability you feed into monotonic alignment search if the model cannot access mel features (`mu_x` and `y` in your code). Did you still keep the $\mathcal{L}_{enc}$ loss that constrains the similarity between `mu_y` and the mel-spectrogram? Furthermore, since we do not know the mean statistics of the waveform, does the SDE have to change as well? I have listened to some samples from the e2e model on your website and noticed that although the audio was noisy, the alignments were quite decent. Did you try to improve the audio quality with an adversarial loss like HiFi-GAN? Thank you.
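For context, here is a minimal sketch (not the repository's exact code) of the Gaussian log-likelihood that Grad-TTS uses both as the MAS alignment score and inside $\mathcal{L}_{enc}$; the `mu_x`, `y` names and the identity-covariance assumption follow the paper, everything else is illustrative:

```python
import math
import torch

def gaussian_log_prior(mu_x, y):
    """Log N(y_j; mu_i, I) for every (text position i, mel frame j) pair.

    mu_x: [n_feats, T_text] encoder outputs, y: [n_feats, T_mel] mel frames.
    Returns a [T_text, T_mel] matrix of log-likelihoods; MAS maximizes their
    sum over monotonic alignments, and L_enc is the same quantity evaluated
    on the aligned mu_y.
    """
    n_feats = mu_x.shape[0]
    # Pairwise squared distances ||y_j - mu_i||^2.
    sq_dist = (mu_x.t().unsqueeze(1) - y.t().unsqueeze(0)).pow(2).sum(-1)
    return -0.5 * (sq_dist + n_feats * math.log(2 * math.pi))
```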

ivanvovk commented 2 years ago

@quangnh-2761 Hi! Yes, we've tried several experiments with the end2end pipeline, but for some reason training on raw waveforms with a mean-shifted terminal distribution was not stable enough. The samples you listened to on the demo page were synthesized with the end2end model that generates audio from pure Gaussian noise, i.e. the denoising process starts from $\mathcal{N}(0, \mathbf{I})$, not from $\mathcal{N}(\mu, \mathbf{I})$ (however, there are recent studies on how to apply this concept to raw waveforms, check the SpecGrad paper). The WaveGrad decoder architecture was conditioned directly on aligned mel-spectrograms obtained from the text encoder. Thus, to train the duration predictor with MAS we still used the $\mathcal{L}_{enc}$ loss on mel-specs. Conditioning the decoder on uncurated, sharp mel-specs from the text encoder is the main reason why the final quality is bad. Adding some intermediate layers as in current end2end pipelines (VITS, NaturalSpeech), or significantly increasing the number of WaveGrad parameters (as in WaveGrad 2), can potentially solve the problem.
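To make the difference in initialization concrete, here is a minimal sketch (illustrative PyTorch, not the authors' implementation; shapes and names are assumptions) of the two starting points for the reverse diffusion:

```python
import torch

# Dummy shapes for illustration only: batch, mel bins, mel frames, samples.
B, C, T_mel, hop = 1, 80, 200, 256
mu = torch.zeros(B, C, T_mel)        # stands in for aligned encoder outputs

# Grad-TTS setting (mel decoder): start the reverse SDE from N(mu, I).
z_mel = mu + torch.randn_like(mu)

# End2end demo setting (WaveGrad-style decoder): start from N(0, I) in the
# waveform domain and condition the decoder on the aligned mel features.
z_wav = torch.randn(B, 1, T_mel * hop)
```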

quangnh-2761 commented 2 years ago

Thanks for your informative response. I will reproduce it on my data and try other end2end architectures to check whether they help.

quangnh-2761 commented 2 years ago

WaveGrad Large indeed helps, but interestingly, outputs on my dataset with the WaveGrad Base architecture are not distorted; I will experiment with LJSpeech to find out what the difference is. Can I ask which noise schedule you tried? I was struggling to find suitable $\beta_0$ and $\beta_1$ and decided to train with $[1e-4, 1]$ (corresponding to 100 steps of $[1e-6, 1e-2]$ in DDPM).

ivanvovk commented 2 years ago

@quangnh-2761 Great! We used the same noise schedule as in the original WaveGrad work: $[1e-6, 1e-2]$. In the case of the Grad-TTS implementation, you should multiply it by 1000 (the discretization of the continuous process with $t\in[0,1]$) when setting it in params.py.
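In other words, a minimal sketch of the resulting settings, assuming the `beta_min`/`beta_max` naming used in params.py (the exact variable names are an assumption here):

```python
# params.py (sketch): WaveGrad's discrete schedule [1e-6, 1e-2] over 1000
# steps, rescaled by 1000 for the continuous-time process with t in [0, 1].
beta_min = 1e-3   # 1e-6 * 1000
beta_max = 10.0   # 1e-2 * 1000
```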

quangnh-2761 commented 2 years ago

Thank you. For some reason WaveGrad Base with $[1e-3, 10]$ generated noisy audio (maybe I didn't train long enough), while the output of WaveGrad Base with $[1e-4, 1]$ has decent quality. WaveGrad Large performs well with both schedules.

ivanvovk commented 2 years ago

@quangnh-2761 Good! Can you share some audio samples to listen to?

quangnh-2761 commented 2 years ago

Sorry, but I cannot share samples from my datasets because of a privacy policy. I am experimenting with LJSpeech, though, and will send some samples when I finish training.

quangnh-2761 commented 2 years ago

https://drive.google.com/drive/folders/1OCK_CD6nFmQZGPd_4hSdJLEN_ME1PxIU here are some samples from the Base and Large models, trained for ~1k epochs. I think they are acceptable but still not perfect; I will keep training to see if they improve (WaveGrad 2 with the Base model can reach nearly 3.9 MOS, maybe because of other small details).

One problem is that I cannot do batch inference: during training I took random waveform segments to fit into memory, so the model never learned how to deal with padding at inference time. The output soon explodes to infinity if I multiply by the mask; otherwise, the generated audio may be affected by noise from the padded frames.

Another thing is that I have to sample up to 1k steps to obtain good output. I tried the few-step schedules from WaveGrad but they didn't converge (I divided $\beta$ by 1000 and converted the score to noise to match WaveGrad). Do you know how to find a few-step schedule manually? Edit: it was because I used a wrong formula to convert time; with the correct schedule it works.
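For reference, a minimal sketch of the score-to-noise conversion mentioned above, under the VP-SDE perturbation kernel used by Grad-TTS (function and variable names are illustrative, not from either codebase):

```python
import torch

def noise_from_score(score, t, beta_min=1e-3, beta_max=10.0):
    """Convert a score estimate into a WaveGrad-style noise (epsilon) estimate.

    Under the VP-SDE with beta(t) = beta_min + (beta_max - beta_min) * t,
    x_t = mean + sqrt(lambda_t) * eps, where
    lambda_t = 1 - exp(-(beta_min * t + 0.5 * (beta_max - beta_min) * t**2)).
    The score of the perturbation kernel is -(x_t - mean) / lambda_t,
    hence eps = -sqrt(lambda_t) * score.
    """
    cum_beta = beta_min * t + 0.5 * (beta_max - beta_min) * t ** 2
    lambda_t = 1.0 - torch.exp(-cum_beta)
    return -torch.sqrt(lambda_t) * score
```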

ivanvovk commented 2 years ago

@quangnh-2761 I see, good! It seems that a hard increase in the number of parameters really helps. To improve the inference speed, I can suggest trying a noise-schedule grid search for the desired number of solver steps. Or you can take your existing model and use our novel reverse SDE solver: https://arxiv.org/abs/2109.13821. It requires a much lower number of steps to produce good-quality samples, it is easy to implement, and you don't need to re-train the model.
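A minimal sketch of what such a grid search could look like; the `synthesize` and `quality` callables are hypothetical stand-ins for your few-step sampler and whatever objective metric you prefer (e.g. L1 distance to a ground-truth mel on a held-out utterance):

```python
import itertools
import numpy as np

def grid_search_schedule(synthesize, quality, n_steps=6,
                         beta_grid=np.geomspace(1e-6, 1e-1, num=8)):
    """Brute-force search over ascending n-step noise schedules.

    synthesize(schedule) -> generated audio for a held-out utterance,
    quality(audio) -> scalar score (higher is better); both user-supplied.
    """
    best_score, best_schedule = -np.inf, None
    for combo in itertools.combinations(beta_grid, n_steps):
        schedule = np.array(combo)          # combinations are already ascending
        score = quality(synthesize(schedule))
        if score > best_score:
            best_score, best_schedule = score, schedule
    return best_schedule, best_score
```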

quangnh-2761 commented 2 years ago

Thank you, I will check your work on the fast solver. As for my dataset, its language (Vietnamese) is monosyllabic and words are not linked when pronounced, so I think it is easier to learn and harder to spot errors in; IMO the required model size is somewhat data-dependent.