lucidrains / voicebox-pytorch

Implementation of Voicebox, new SOTA Text-to-speech network from MetaAI, in Pytorch
MIT License

Mel model #44

Open lixuyuan102 opened 5 months ago

lixuyuan102 commented 5 months ago

May I ask whether this implementation of the model has been tested on mel spectrograms? Using a Transformer model with only a convolutional positional encoding added at the input, I got discontinuous generation results.

lucidrains commented 5 months ago

no, it is not well tested for mel

always welcome contributions

ex3ndr commented 5 months ago

I am working on a Mel version in my reimplementation, AMA!

atmbb commented 5 months ago

@ex3ndr Does your model work well on the zero-shot task?

ex3ndr commented 5 months ago

@atmbb which task?

atmbb commented 5 months ago

@ex3ndr Thanks for the reply. I meant the style transfer task in Figure 4 of the paper (the zero-shot TTS task).

Diverse sampling works well, but in the style transfer task the speech generated by my model does not follow the prompt's speaking style.

ex3ndr commented 5 months ago

@atmbb I just restarted training my model from scratch. I am now at step 27651, training on just two 4090s with a batch size of 16 * 8 per GPU - quite small compared to the original paper. It somewhat follows the prompt, but it is too early to tell.

In my previous run I trained for 400k steps and it followed prompts correctly.

ex3ndr commented 5 months ago

@atmbb I remembered one thing: ALiBi requires longer training sequences than are easily available. I had been training on segments of at most 5 seconds, and the audio style collapsed after ~5 seconds. I saw the same problem when the audio was conditioned well for a few seconds and then collapsed; I then figured out that with longer conditioning audio I had fewer "valid" seconds. ALiBi starts to work at around 300k iterations in my case, but longer-context training is still required.

The funny thing is that they mention degradation with longer conditioning, starting at ~15 seconds - which is exactly the authors' training context size.

Looking at the ALiBi coefficients, I think a training context of ~2k frames or more is needed to generalize well. No one should expect ALiBi to generalize after just 500 frames (~5 seconds) - the coefficients are simply not steep enough for the attention bias to vanish at that distance.
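A quick back-of-the-envelope check supports this. The sketch below uses the standard ALiBi head slopes 2^(-8h/n) from the ALiBi paper (not necessarily this repo's exact implementation) to see how much the shallowest head's bias attenuates attention at distances of 500 vs. 2000 frames:

```python
import math

def alibi_slopes(num_heads):
    # Standard ALiBi head slopes for a power-of-two head count:
    # head h (1-indexed) gets slope 2^(-8h / num_heads).
    return [2 ** (-8 * (h + 1) / num_heads) for h in range(num_heads)]

slopes = alibi_slopes(8)   # [0.5, 0.25, ..., 2^-8]
shallowest = min(slopes)   # 2^-8 ~= 0.0039 for 8 heads

# The attention-score penalty at a given key-query distance is -slope * distance;
# exponentiating it gives the multiplicative damping applied inside the softmax.
damp_500 = math.exp(-shallowest * 500)     # ~0.14 -- far from vanished
damp_2000 = math.exp(-shallowest * 2000)   # ~0.0004 -- effectively gone
print(damp_500, damp_2000)
```

So at a 500-frame context the shallowest head still passes ~14% of the unbiased attention weight at maximum distance, while at ~2000 frames it is negligible, which matches the "not steep enough to vanish" observation.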

lixuyuan102 commented 5 months ago

Using a different backbone network than the one in the Voicebox paper (a Transformer with only a single convolutional positional encoding layer) to implement the ODE model, I have achieved good zero-shot performance. However, a multi-layer Transformer with one convolutional positional encoding layer still does not work on mel in my experiments. I speculate that the original paper may have used multiple convolutional positional encoding layers before the Transformer module. I'll try to contribute the code that worked well on mel.
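As a rough illustration of that speculation (not the paper's actual architecture), a stack of convolutional positional encoding layers in front of the Transformer could look like the sketch below; the layer count, kernel size, group count, and the 80-bin mel dimension are all arbitrary choices for the example:

```python
import torch
import torch.nn as nn

class ConvPositionalEmbedding(nn.Module):
    # Hypothetical sketch: a stack of grouped Conv1d layers producing a
    # position-dependent signal that is added residually to the input,
    # in the spirit of the speculated multi-layer conv positional encoding.
    def __init__(self, dim, kernel_size=31, num_layers=2, groups=16):
        super().__init__()
        assert kernel_size % 2 == 1, "odd kernel keeps sequence length"
        self.layers = nn.Sequential(*[
            nn.Sequential(
                nn.Conv1d(dim, dim, kernel_size,
                          padding=kernel_size // 2, groups=groups),
                nn.GELU(),
            )
            for _ in range(num_layers)
        ])

    def forward(self, x):            # x: (batch, seq, dim)
        residual = x
        x = x.transpose(1, 2)        # (batch, dim, seq) for Conv1d
        x = self.layers(x)
        return x.transpose(1, 2) + residual

# Usage on a batch of mel-like features (80 mel bins assumed here)
pos = ConvPositionalEmbedding(dim=80)
out = pos(torch.randn(2, 100, 80))  # shape preserved: (2, 100, 80)
```

Whether one or several such layers precede the Transformer is exactly the open question in this thread; the sketch only shows how stacking them composes without changing the sequence shape.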

ex3ndr commented 5 months ago

zero-shot.zip Trained for 50k steps and performance is now reasonable. Style is followed as it should be. (Quality is still not great - it needs a few more days of training.)

ex3ndr commented 4 months ago

I have published a beta: https://github.com/ex3ndr/supervoice It collapses on long sentences, some voices are distorted (I bet it is just undertrained), and my GPT phonemizer network doesn't support numbers yet.