lixuyuan102 opened 5 months ago
No, it is not well tested for mel.
Contributions are always welcome.
I am working on the Mel version in my reimplementation, AMA!
@ex3ndr Does your model work well on the zero-shot task?
@atmbb which task?
@ex3ndr Thanks for the reply. I was asking about the style transfer in Figure 4 of the paper (the zero-shot TTS task).
Diverse sampling works well, but in the style transfer task the generated speech does not follow the prompt speech style in my model.
@atmbb I just restarted training my model from scratch. I am now at step 27651, training on just two 4090s with a batch size of 16 * 8 per GPU - quite small compared to the original paper. It somewhat follows the prompt, but it is too early to tell.
In my previous run I trained for 400k steps and it followed prompts correctly.
@atmbb I remembered one thing: ALiBi requires longer training sequences than are easily available for training. I have been training on max 5-second segments, and the audio style collapsed after ~5 seconds. I saw the same problem when audio was conditioned well for a few seconds and then collapsed; I then figured out that for longer conditioning audio I had fewer "valid" seconds. ALiBi starts to work at around 300k iterations in my case, but longer-context training is still required.
The funny thing is that they mention degradation on longer conditioning, and they saw degradation start at ~15 seconds, which is exactly the authors' training context size.
Looking at the ALiBi coefficients, I think it needs a training context size of ~2k or more to generalize well. No one should expect ALiBi to generalize after just 500 positions (~5 seconds) - the coefficients are just not steep enough for the attention to vanish.
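To make the steepness point concrete, here is a quick check of the standard ALiBi slopes (assuming 8 attention heads, a common setting; this is an illustrative sketch, not code from this repo):

```python
import math

def alibi_slopes(n_heads: int):
    # Standard ALiBi slopes for a power-of-two head count:
    # head i gets slope 2^(-8 * i / n_heads), for i = 1..n_heads.
    return [2 ** (-8 * i / n_heads) for i in range(1, n_heads + 1)]

slopes = alibi_slopes(8)
shallowest = slopes[-1]  # 2^-8, the head with the weakest distance penalty

# Additive attention bias of the shallowest head at two distances;
# exp(bias) is the multiplicative factor applied to that attention logit.
for dist in (500, 2000):
    bias = -shallowest * dist
    print(f"distance {dist}: bias {bias:.2f}, factor {math.exp(bias):.4f}")
```

At distance 500 the shallowest head's bias is only about -1.95 (a factor of roughly 0.14, far from vanishing), while at distance 2000 it reaches about -7.8, which is why a ~2k training context extrapolates so much better.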
Employing a different backbone network than the one used in the Voicebox paper (a Transformer with only convolutional positional coding) to implement the ODE model, I have achieved good zero-shot performance. However, a multi-layer transformer with a single convolutional positional coding layer still does not work on Mel in my experiments. I speculate that the original paper may have used multiple layers of convolutional positional encoding before the transformer module. I'll try to contribute the code that worked well on Mel.
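For reference, a minimal PyTorch sketch of the speculation above - stacking several convolutional positional encoding layers before the transformer blocks. The layer sizes (256-dim, kernel 31, 16 groups, 3 layers) are hypothetical, not taken from either paper or repo:

```python
import torch
import torch.nn as nn

class ConvPositionalEncoding(nn.Module):
    # One grouped 1D-conv layer used as a relative positional encoding,
    # added residually to the input (hypothetical sketch).
    def __init__(self, dim: int, kernel_size: int = 31):
        super().__init__()
        self.conv = nn.Conv1d(dim, dim, kernel_size,
                              padding=kernel_size // 2, groups=16)
        self.act = nn.GELU()

    def forward(self, x):
        # x: (batch, time, dim) -> conv over the time axis
        y = self.conv(x.transpose(1, 2)).transpose(1, 2)
        return x + self.act(y)

# Stacking several such layers in front of the transformer blocks:
pos_enc = nn.Sequential(*[ConvPositionalEncoding(256) for _ in range(3)])
x = torch.randn(2, 100, 256)
print(pos_enc(x).shape)  # shape is preserved: (batch, time, dim)
```

The residual connection lets each extra conv layer widen the positional receptive field without disturbing the features themselves.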
zero-shot.zip Trained for 50k steps and performance is now reasonable. Style is followed as it should be. (Quality is still not great - it needs a few more days of training.)
I have published a beta: https://github.com/ex3ndr/supervoice It collapses on long sentences, and some voices are distorted (I bet it is just undertrained), and my GPT phonemiser network doesn't support numbers yet.
May I ask whether this implementation of the model has been tested on Mel spectrograms? Using a Transformer model with only convolutional positional coding added at the beginning, I get discontinuous generation results.