auspicious3000 / autovc

AutoVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss
https://arxiv.org/abs/1905.05879
MIT License

question about training loss and inference performance #61

Open zzw922cn opened 3 years ago

zzw922cn commented 3 years ago

Hi, thank you for your very nice work! I have rerun this project for 90K steps, and the loss_id_psnt is around 0.07. I tried feeding in an in-domain speaker's mel-spectrogram with his own speaker embedding as the source embedding, and another speaker's embedding as the target embedding. When I generate the wav with the Griffin-Lim vocoder, the voice still sounds like the source speaker. Is this normal? At what step, or at what loss_id_psnt, should voice conversion start to work? Thank you very much!!

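For reference, the conversion step described above boils down to something like the following sketch. This assumes the repo's `model_vc.Generator(dim_neck, dim_emb, dim_pre, freq)` signature with `forward(x, emb_org, emb_trg)` returning the pre-net output, post-net output, and codes; the checkpoint and `.npy` file names are hypothetical, and the un-normalization of the log-mel back to linear magnitude before Griffin-Lim is omitted:

```python
import torch
import numpy as np
import librosa
from model_vc import Generator  # assumed repo module

device = 'cuda' if torch.cuda.is_available() else 'cpu'
G = Generator(32, 256, 512, 32).eval().to(device)
ckpt = torch.load('autovc.ckpt', map_location=device)  # hypothetical checkpoint
G.load_state_dict(ckpt['model'])

# Source mel-spectrogram (T x 80, T padded to a multiple of freq=32)
# and the two 256-dim speaker embeddings; file names are placeholders.
mel_src = torch.from_numpy(np.load('src_uttr.npy')[None]).float().to(device)
emb_src = torch.from_numpy(np.load('src_emb.npy')[None]).float().to(device)
emb_trg = torch.from_numpy(np.load('trg_emb.npy')[None]).float().to(device)

with torch.no_grad():
    _, mel_cvt, _ = G(mel_src, emb_src, emb_trg)  # keep the post-net output

# Griffin-Lim reconstruction; the repo's mels are log-scaled and normalized,
# so map them back to linear magnitude first (not shown here).
mel = mel_cvt.squeeze().cpu().numpy().T  # (80, T)
wav = librosa.feature.inverse.mel_to_audio(mel, sr=16000, n_fft=1024,
                                           hop_length=256)
```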

auspicious3000 commented 3 years ago

You probably need to fine-tune your bottleneck dimensions.

zzw922cn commented 3 years ago

Do you think I should enlarge or reduce the bottleneck dimension?

auspicious3000 commented 3 years ago

There's detailed information in the paper on how to tune the bottleneck.

zzw922cn commented 3 years ago

OK, thank you~

ruclion commented 3 years ago

> Do you think I should enlarge or reduce the bottleneck dimension?

The paper says: "The first model, which we name the 'too narrow' model, reduces the dimensions of C1→ and C1← from 32 to 16, and increases the downsampling factor from 32 to 128 (note that a higher downsampling factor means a lower temporal dimension). The second model, which we name the 'too wide' model, increases the dimensions of C1→ and C1← to 256, decreases the downsampling factor to 8, and sets λ to 0."

But for a new dataset, how should we choose these hyperparameters? And should we use the DANN idea (a domain-adversarial classifier to strip speaker information from the content codes)? Hope to discuss with you~
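For concreteness, the paper's bottleneck knobs map onto the repo's `Generator` constructor roughly as follows, assuming the `Generator(dim_neck, dim_emb, dim_pre, freq)` signature: `dim_neck` is the per-direction code dimension (C1→ / C1←) and `freq` is the downsampling factor. The non-default values below are the paper's ablation settings, not recommendations:

```python
from model_vc import Generator  # assumed repo module

default    = Generator(dim_neck=32,  dim_emb=256, dim_pre=512, freq=32)
too_narrow = Generator(dim_neck=16,  dim_emb=256, dim_pre=512, freq=128)
too_wide   = Generator(dim_neck=256, dim_emb=256, dim_pre=512, freq=8)

# Rule of thumb from the paper: if the converted voice still sounds like the
# source speaker (bottleneck too wide), shrink dim_neck and/or raise freq;
# if the output becomes unintelligible (too narrow), do the opposite.
```

Note that the "too wide" ablation also sets λ (the code reconstruction weight) to 0, which would be a separate change in the training loss, not in the constructor.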

innovator1311 commented 3 years ago


@zzw922cn Can you tell me which dataset you used and the batch size of your training run? Thanks in advance!!