keonlee9420 / Comprehensive-Transformer-TTS

A Non-Autoregressive Transformer-based Text-to-Speech, supporting a family of SOTA transformers with supervised and unsupervised duration modeling. This project grows with the research community, aiming to achieve the ultimate TTS.
MIT License
319 stars · 41 forks

New TTS Model request #3

Open rishikksh20 opened 2 years ago

rishikksh20 commented 2 years ago

Recently, two papers regarding Transformer TTS popped up, and I think both are suitable for this repo:

1) DelightfulTTS: The Microsoft Speech Synthesis System for Blizzard Challenge 2021
2) Emphasis control for parallel neural TTS

I think both are easy to implement and well suited for this repo.

keonlee9420 commented 2 years ago

Hi @rishikksh20, thanks for the requests! I can see that they fit well with this project. I will look into them and hope that I can merge them into this repo :)

rishikksh20 commented 2 years ago

Hi @keonlee9420, DelightfulTTS is similar to the Phone-Level Mixture Density Network, but here, instead of using a complicated GMM-based model, the authors directly use latent representations for the prosody predictor and prosody encoder. The phoneme-level prosody encoder and utterance-level encoder are similar to this. I think they simply use a Global Style Token (GST) module as the utterance-level encoder.
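For illustration, here is a minimal runnable sketch of such a GST-style utterance-level encoder (module names, sizes, and the GRU reference encoder are stand-ins, not the repo's or the paper's actual code):

```python
import torch
import torch.nn as nn

class GSTUtteranceEncoder(nn.Module):
    """Minimal GST-style utterance-level encoder: summarize a reference mel
    into one query vector, then attend over a bank of learned style tokens.
    (Illustrative sketch; the real GST uses a conv2d reference encoder.)"""
    def __init__(self, n_mels=80, d_model=256, n_tokens=10, n_heads=4):
        super().__init__()
        self.ref_encoder = nn.GRU(n_mels, d_model, batch_first=True)
        self.style_tokens = nn.Parameter(torch.randn(n_tokens, d_model))
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

    def forward(self, ref_mel):                      # (B, frames, n_mels)
        _, last_hidden = self.ref_encoder(ref_mel)   # (1, B, d_model)
        query = last_hidden.transpose(0, 1)          # (B, 1, d_model)
        tokens = self.style_tokens.unsqueeze(0).expand(ref_mel.size(0), -1, -1)
        style, _ = self.attn(query, tokens, tokens)  # attend over style tokens
        return style.squeeze(1)                      # (B, d_model) per utterance

enc = GSTUtteranceEncoder()
style = enc(torch.randn(2, 120, 80))                 # one style vector per utterance
```

The learned tokens act as a soft codebook of global styles: at inference you can attend over them with a reference mel as above, or weight the tokens manually.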

rishikksh20 commented 2 years ago

DelightfulTTS learns phoneme-level prosody implicitly, whereas Emphasis control for parallel neural TTS learns the same explicitly by extracting features, as in this repo.

rishikksh20 commented 2 years ago

I think DelightfulTTS is an all-in-one solution: it uses a non-autoregressive architecture with conformer blocks, as well as both utterance-level and phoneme-level predictors.

keonlee9420 commented 2 years ago

Thank you for the summary. The DelightfulTTS model seems worth a try, as you described. I will try it and share it through an update soon!

rishikksh20 commented 2 years ago

@keonlee9420 Hi, were you able to train DelightfulTTS successfully?

keonlee9420 commented 2 years ago

Yes, but it shows an overfitting issue. I guess this issue originates from the limited capacity of the prosody predictor, since I can confirm that the prosody embedding extracted from the prosody extractor can actually improve expressiveness, including the validation loss.

rishikksh20 commented 2 years ago

Did you train the predictor and extractor simultaneously, or did you train the extractor for 100k steps first, then pause it and start predictor training in a teacher-forcing manner, as mentioned in the AdaSpeech paper?

rishikksh20 commented 2 years ago

In my case I made some modifications to the architecture. I used the same extractors as mentioned in the DelightfulTTS paper, but I am not using any predictor at the utterance level, because I want to use it like GST-Tacotron, by passing an external reference mel. For the phoneme-level predictor, I used a predictor architecture similar to the original AdaSpeech's, which resembles the duration and pitch predictors. I trained the phoneme-level extractor for 100k steps, then stopped it and started predictor training. While training the predictor, the model loss behaved well until about 2,000 steps with a batch size of 32, but after around 2,200 steps the loss started increasing, never converged, and the output was just noise. However, when I passed a detached hidden state to the phoneme-level extractor, it trained perfectly and even the latent variable works: I am able to change the emotion using the latent variable of the phoneme-level predictor.
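For clarity, the detach trick looks roughly like this in PyTorch (a toy sketch with stand-in modules, not my actual training code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

B, T, D = 8, 32, 256                       # batch, phoneme length, model dim

text_encoder = nn.Linear(D, D)             # stand-ins for the real modules
prosody_extractor = nn.Linear(D, D)        # reference-based "teacher"
prosody_predictor = nn.Linear(D, D)        # text-only "student"

h = text_encoder(torch.randn(B, T, D))     # text-encoder hidden states

# The fix: detach the hidden state before the prosody branch, so its
# gradients cannot flow back into the text encoder and destabilize it.
target = prosody_extractor(h.detach())
pred = prosody_predictor(h.detach())

loss = F.mse_loss(pred, target.detach())   # teacher-forced target for the predictor
loss.backward()                            # gradients stay in the predictor; neither
                                           # the extractor nor text_encoder is disturbed
```

Without the `.detach()`, the prosody losses also back-propagate into the text encoder, which matches the instability I saw after around 2,200 steps.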

keonlee9420 commented 2 years ago

Ah, thanks for sharing. I trained jointly, without any detach or schedule, from the first step. So what you mean is:

  1. training only the prosody extractor (not the predictor) until 100k steps,
  2. then starting to train the prosody predictor, but with a detached prosody embedding from the prosody extractor (while the prosody extractor keeps training),

right? Or in 2, do you mean that no gradient flows back to the prosody extractor either?

rishikksh20 commented 2 years ago

I suggest 1
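That is, roughly (a toy runnable sketch; every module, loss, and constant here is a stand-in for the real training code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

D = 256
backbone = nn.Linear(D, D)                 # stands in for encoder/decoder + extractor
predictor = nn.Linear(D, D)                # phoneme-level prosody predictor
opt = torch.optim.Adam(list(backbone.parameters()) + list(predictor.parameters()))

EXTRACTOR_WARMUP = 100                     # 100k in the real run; small here to demo

for step in range(EXTRACTOR_WARMUP + 100):
    h = backbone(torch.randn(4, D))        # hidden states / extractor embedding
    loss = h.pow(2).mean()                 # stands in for mel + extractor losses
    if step >= EXTRACTOR_WARMUP:
        pred = predictor(h.detach())       # predictor joins after the warm-up,
        loss = loss + F.mse_loss(pred, h.detach())  # regressing onto a detached target
    opt.zero_grad()
    loss.backward()
    opt.step()
```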

rishikksh20 commented 2 years ago

@keonlee9420 In your experience, which performs better when you have only 20 hours of speech data: a normal Transformer encoder or a Conformer?

rishikksh20 commented 2 years ago

As per this article, the Microsoft TTS API is built on DelightfulTTS.

hdmjdp commented 2 years ago

@rishikksh20 Can you share your code?

> I suggest 1

hdmjdp commented 2 years ago

> detached hidden state

@rishikksh20 Does this refer to the text encoder output?

rishikksh20 commented 2 years ago

> detached hidden state
>
> @rishikksh20 Does this refer to the text encoder output?

Yes.

hdmjdp commented 2 years ago

@rishikksh20 After 100k steps, do the params of the prosody extractor keep updating, or are they just frozen?

v-nhandt21 commented 2 years ago

Is there any confirmation on the quality of the Transformer encoder versus the Conformer? I found that the conformer in DelightfulTTS is a little different from the ASR one.

rishikksh20 commented 2 years ago

@v-nhandt21 Yes, the conformer in TTS is a modified version of the ASR one.