152334H / DL-Art-School

TorToiSe fine-tuning with DLAS
GNU Affero General Public License v3.0

Multispeaker dataset #44

Open HobisPL opened 1 year ago

HobisPL commented 1 year ago

What should a dataset for multispeaker training look like? Should each speaker have an identifier at the end? For example:

wavs/1.wav|transcription.
wavs/1.wav|transcription.|1
wavs/1.wav|transcription.|speaker_name

152334H commented 1 year ago

It's all learned implicitly. There's no fundamental difference between a single-speaker and a multi-speaker dataset apart from the variance of the distribution of conditioning latents && predicted audio.
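
In other words, a merged multispeaker train file keeps the plain path|transcription format with no speaker column. As a minimal merging sketch (the datasets/<speaker>/ layout here is an assumption for illustration, not something DLAS mandates):

```python
# Minimal sketch: merge several single-speaker LJSpeech-style transcript
# files into one multispeaker train file. The datasets/<speaker>/ layout
# is an assumption for illustration, not something DLAS mandates.
from pathlib import Path

root = Path("datasets")
merged = []
for speaker_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    transcript = speaker_dir / "train.txt"
    if not transcript.is_file():
        continue
    for line in transcript.read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue
        wav_rel, text = line.split("|", 1)
        # Re-root the wav path so it stays valid in the merged file.
        # No speaker ID column is added -- identity is learned
        # implicitly from the conditioning latents.
        merged.append(f"{speaker_dir.name}/{wav_rel}|{text}")

Path("train_multispeaker.txt").write_text("\n".join(merged) + "\n", encoding="utf-8")
```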

It is perhaps better to actually model each wav file as an individual speaker: each speaker is a point in latent space, and there are general clusters corresponding to individual characters. You could circle each cluster and label it as the broad space of a single speaker's voice, but in practice there ought to be overlaps for a sufficiently diverse multispeaker dataset.
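
One way to see those clusters is to compute a conditioning latent per wav file and project them to 2D. A rough sketch, assuming the tortoise-tts API (TextToSpeech.get_conditioning_latents, load_audio) and an illustrative datasets/<speaker>/wavs/ layout:

```python
# Rough sketch: one conditioning latent per wav, projected to 2D, to see
# whether each speaker's files cluster in latent space. Assumes the
# tortoise-tts API (get_conditioning_latents, load_audio); the
# datasets/<speaker>/wavs/ layout is illustrative.
from pathlib import Path

import matplotlib.pyplot as plt
import numpy as np
from sklearn.decomposition import PCA
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_audio

tts = TextToSpeech()
points, labels = [], []
for speaker_dir in sorted(Path("datasets").iterdir()):
    for wav in sorted(speaker_dir.glob("wavs/*.wav")):
        clip = load_audio(str(wav), 22050)
        # Treat each wav as its own "speaker": one latent per file.
        ar_latent, _ = tts.get_conditioning_latents([clip])
        points.append(ar_latent.detach().flatten().cpu().numpy())
        labels.append(speaker_dir.name)

xy = PCA(n_components=2).fit_transform(np.stack(points))
labels = np.array(labels)
for name in np.unique(labels):
    mask = labels == name
    plt.scatter(xy[mask, 0], xy[mask, 1], label=name, s=10)
plt.legend()
plt.savefig("latent_clusters.png")
```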

152334H commented 1 year ago

I think there is a potential idea to be applied here, actually -- you could try to apply the exact same conditioning latent for EVERY line said by a specific character. But that would require additional code and stuff
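
As a sketch of that idea: pool one latent per character from all of their lines and cache it for reuse, instead of deriving a fresh latent per clip. The cache format and directory layout are assumptions; DLAS has no built-in hook for this.

```python
# Sketch of the "one fixed latent per character" idea: pool the
# conditioning latent over ALL of a character's lines and cache it.
# Hypothetical cache format; DLAS has no built-in hook for this.
from pathlib import Path

import torch
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_audio

tts = TextToSpeech()
for speaker_dir in sorted(Path("datasets").iterdir()):
    clips = [load_audio(str(w), 22050) for w in sorted(speaker_dir.glob("wavs/*.wav"))]
    if not clips:
        continue
    # get_conditioning_latents pools over every clip it is given, so
    # feeding all lines yields a single character-level latent.
    ar_latent, diff_latent = tts.get_conditioning_latents(clips)
    torch.save({"autoregressive": ar_latent, "diffusion": diff_latent},
               speaker_dir / "fixed_latent.pth")
```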

"Multispeaker" in the current case just means exposing the model to more kinds of speakers during training. Ideally the model would learn to clone all of them, conditionally on the input zero-shot latent; in practice it underfits severely with the short number of epoches available in fine-tuning. I suspect a much much longer training run might teach the model to correctly remember all speakers, but it might also just lead to terrible overfitting on the existing lines

LorenzoBrugioni commented 1 year ago

Hi, and thanks for your work. As of now, if I fine-tune on a single-speaker dataset it becomes a single-speaker model, or at least it seems that way to me. Even when I use the conditioning latents from another speaker during zero-shot inference, I always get the voice of the speaker I fine-tuned on.
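
For reference, the test being described, checking whether a fine-tuned model still follows zero-shot latents, looks roughly like this with the tortoise-tts API. The kwarg for loading a fine-tuned autoregressive checkpoint varies across forks, so autoregressive_model_path and the voice name are placeholders:

```python
# Rough sketch of the test described above: load a fine-tuned model and
# condition on a DIFFERENT speaker's latents. NOTE: the kwarg for
# pointing at a fine-tuned AR checkpoint varies across tortoise forks;
# autoregressive_model_path and the voice name are placeholders.
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_voice

tts = TextToSpeech(autoregressive_model_path="finetuned_gpt.pth")
voice_samples, conditioning_latents = load_voice("some_other_speaker")
audio = tts.tts_with_preset(
    "Testing whether the fine-tuned model still clones unseen voices.",
    voice_samples=voice_samples,
    conditioning_latents=conditioning_latents,
    preset="fast",
)
```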

SiddantaK commented 10 months ago

Hi, is it possible to train this model on a multispeaker dataset? If so, can you give detailed information on how? Thank you in advance.

GuenKainto commented 9 months ago

I tried to train on a multispeaker dataset, but something is wrong. I trained another language on male-voice data; when I try to clone, it struggles with women's or babies' voices, but it can clone other male voices (~80%). The same happens with female-voice data: it can clone male voices, but it is best at female voices (some high voices are hard, or come out with annoying noise). When I train on mixed male and female data, the cloned output sounds like a random voice: sometimes male, sometimes female, not the voice I want to clone.