152334H / DL-Art-School

TorToiSe fine-tuning with DLAS
GNU Affero General Public License v3.0

Train other models in the pipeline #3

Open · 152334H opened this issue 1 year ago

152334H commented 1 year ago

Apart from the GPT model (which has been implemented), there are four other models in TorToiSe that could be fine-tuned: the diffusion model, the vocoder, the VQVAE, and the CLVP.

IMO, the diffusion model + vocoder are the obvious targets. Vocoders are often fine-tuned in other TTS pipelines, and the diffusion model serves roughly the same purpose...

...but the diffusion model is the only other model that takes the conditioning latents into account. I suspect that fine-tuning both the autoregressive & diffusion models on a single speaker would lead to a kind of 'mode collapse' (bear with this inaccurate phrasing), where the conditioning latents stop having any substantial effect on the output speech. Ideally, some form of mixed-speaker training would account for this, but I'm not sure how to accomplish that yet.
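
A minimal sketch of what that mixed-speaker training could look like (hypothetical code, not from this repo; `MixedSpeakerSampler`, `speaker_ids`, `target_speaker`, and `mix_ratio` are made-up names) — a sampler that dilutes the fine-tune voice with other speakers so the conditioning latents still have to do work:

```python
import random
from torch.utils.data import Sampler

class MixedSpeakerSampler(Sampler):
    """Yield dataset indices so that roughly `mix_ratio` of each epoch comes
    from the target speaker and the rest from other speakers, so the
    conditioning latents must still disambiguate speakers."""

    def __init__(self, speaker_ids, target_speaker, mix_ratio=0.5, seed=0):
        # speaker_ids: list mapping dataset index -> speaker label
        self.target_idx = [i for i, s in enumerate(speaker_ids) if s == target_speaker]
        self.other_idx = [i for i, s in enumerate(speaker_ids) if s != target_speaker]
        self.mix_ratio = mix_ratio
        self.rng = random.Random(seed)

    def _n_other(self):
        n_target = len(self.target_idx)
        wanted = int(n_target * (1 - self.mix_ratio) / max(self.mix_ratio, 1e-6))
        return min(wanted, len(self.other_idx))

    def __iter__(self):
        # Resample the "other speaker" slice each epoch, then shuffle everything.
        combined = self.target_idx + self.rng.sample(self.other_idx, self._n_other())
        self.rng.shuffle(combined)
        return iter(combined)

    def __len__(self):
        return len(self.target_idx) + self._n_other()
```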

Training the VQVAE could be good for datasets that are emotional and substantially different from the usual LJSpeech+LibriTTS+CommonVoice+VoxPopuli+... pile of monotonic speech. But I think it would necessitate a parallel retraining of the GPT model + the CLVP model as well, to account for the change in the tokens it outputs.
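
To spell out why retraining the VQVAE drags the GPT and CLVP along with it: the discrete mel tokens are just indices of nearest codebook entries, so moving the codebook silently remaps the vocabulary the downstream models were trained on. A generic VQ quantization sketch (not this repo's implementation):

```python
import torch

def quantize(frames: torch.Tensor, codebook: torch.Tensor) -> torch.Tensor:
    """Map each frame embedding to the index of its nearest codebook vector.

    frames: (T, D) encoder outputs; codebook: (K, D) learned entries.
    Returns (T,) integer tokens -- the vocabulary the GPT/CLVP consume.
    """
    dists = torch.cdist(frames, codebook)  # (T, K) pairwise distances
    return dists.argmin(dim=1)

# If fine-tuning moves the codebook vectors, the same audio maps to
# different token ids, and a GPT trained on the old ids sees gibberish.
```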

I also think that keeping the CLVP model untrained could be a good idea, to retain the power of the conditioning latents: fine-tuning it on a single voice would bias it toward scoring that specific speaker as more likely than other speakers.
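
For context on that last point: CLVP acts as a CLIP-style reranker at inference, scoring each candidate speech-token sequence against the text and keeping the best ones. A rough sketch of that scoring step (hypothetical function and variable names, not the actual CLVP code):

```python
import torch
import torch.nn.functional as F

def rerank_candidates(text_emb: torch.Tensor, speech_embs: torch.Tensor, k: int = 1):
    """Pick the k candidate speech-token sequences whose embeddings best
    match the text embedding, CLIP-style (cosine similarity).

    text_emb: (D,) embedding of the prompt text.
    speech_embs: (N, D) embeddings of N autoregressive samples.
    """
    scores = F.cosine_similarity(text_emb.unsqueeze(0), speech_embs, dim=1)  # (N,)
    return scores.topk(k).indices

# Fine-tuning CLVP on one voice would shift these scores toward that
# speaker's style, weakening its role as a speaker-neutral judge.
```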

Ryu1845 commented 1 year ago

Might be relevant https://github.com/yuan1615/AdaVocoder

152334H commented 1 year ago

DIFFUSION TRAINING PROGRESS

devilismyfriend commented 1 year ago

Did the new configs and changes improve the diffusion model training?

152334H commented 1 year ago

What I did was try to train the diffusion model on top of a fairly broken GPT fine-tune... which was evidently a bad idea; I couldn't tell whether the result was significantly better or not. I vaguely think "it works", but honestly I should figure out how to enable the FID eval metrics first.
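
For reference, the FID-style metric mentioned here is just the Fréchet distance between Gaussian fits of two embedding sets (real vs. generated clips). A generic sketch of that computation, independent of DLAS's evaluator plumbing:

```python
import numpy as np
from scipy import linalg

def frechet_distance(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Fréchet distance between Gaussians fit to two (N, D) feature sets,
    e.g. embeddings of real vs. generated audio clips."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)

    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # numerical noise can add tiny imaginary parts

    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))
```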

caffeinetoomuch commented 1 year ago

Hi, is this still ongoing? I was trying to train the diffusion model from the template yaml (../experiments/FIXED_diff.yml), but it was throwing unexpected keys when loading the gpt_latent model. I gave the path of the autoregressive model for the produce_latents section. Should I be passing a different model?

152334H commented 1 year ago

Nope, this entire repo + project is dead (I got poached).

XTTS seems at least marginally better; I'd just ask around Coqui about how to train stuff.