liusongxiang / StarGAN-Voice-Conversion

This is a PyTorch implementation of the paper StarGAN-VC: Non-parallel many-to-many voice conversion with star generative adversarial networks
https://arxiv.org/abs/1806.02169

Are there any requirements for training datasets? #6

Closed leokwu closed 5 years ago

leokwu commented 5 years ago

I trained the model on my own Chinese corpus, and the conversion quality is not as good as with the original English corpus. Are there any requirements for training datasets?

Looking forward to your answer.

Thanks.

mahmoudalismail commented 5 years ago

There are many reasons why you could be getting bad performance on your Chinese corpus. The most obvious is that your corpus may contain very few recordings. It could also be that the recordings have varying noise types or are generally not clean and crisp.

Testing this system on a Chinese corpus is definitely interesting, as the authors of the paper tested it on US English speech. I am not sure how this StarGAN would perform given that Chinese is very musical and subtle differences in the phonemes can mean completely different things. If you run into intelligibility problems, you could try increasing the cycle-consistency loss weight during training; maybe that would work. I haven't tried it myself, but I speculate it would help, since the network would then put more weight on preserving the linguistic content of the conversion. If you do try it, please let us know; I am interested in the results. :smiley:
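For reference, here is a minimal sketch of what that loss typically looks like in PyTorch. The names (`G`, `cycle_consistency_loss`, `lambda_cyc`, the speaker codes `c_src`/`c_trg`) are illustrative, not taken from this repo; raising `lambda_cyc` is the knob I mean.

```python
import torch.nn.functional as F

def cycle_consistency_loss(G, x_real, c_src, c_trg, lambda_cyc=10.0):
    """L1 round-trip loss; G is a (hypothetical) conditional generator."""
    x_fake = G(x_real, c_trg)  # convert source features to the target speaker
    x_cyc = G(x_fake, c_src)   # convert back to the source speaker
    # A larger lambda_cyc pushes the generator to preserve linguistic
    # content through the round trip, at the cost of conversion strength.
    return lambda_cyc * F.l1_loss(x_cyc, x_real)
```

A common default for this weight in StarGAN-style code is around 10, so doubling it would be a reasonable first experiment.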

tarepan commented 5 years ago

In my experience, StarGAN-VC/CycleGAN-VC is also strongly affected by the speakers involved.
When I trained my StarGAN-VC implementation on a Japanese corpus with 3 speakers (A, B, C), B2A and C2A worked well, but A2B, A2C, and B2C did not.

Anyway, as mentioned above, Chinese (and Japanese as well) have different acoustic characteristics from English, so if @leokwu tries fine-tuning the model, I am strongly interested in the results.
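In case it helps, a minimal fine-tuning sketch in PyTorch might look like the following. Everything here is a placeholder (the `Generator` stand-in, the checkpoint path, the learning rate), not this repo's actual API; the idea is just to resume from the English-pretrained weights and continue training on the new-language corpus with a reduced learning rate.

```python
import torch
import torch.nn as nn

# Stand-in for the repo's actual generator; in practice, instantiate the
# same architecture that produced the pretrained checkpoint.
class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(36, 36)  # placeholder layer

    def forward(self, x, c=None):
        return self.net(x)

G = Generator()
# Hypothetical checkpoint path; load the English-pretrained weights.
G.load_state_dict(torch.load("pretrained_G.ckpt", map_location="cpu"))

# A smaller learning rate than from-scratch training is typical when
# fine-tuning, so the pretrained weights are not destroyed early on.
optimizer = torch.optim.Adam(G.parameters(), lr=1e-5, betas=(0.5, 0.999))
# ...then run the usual adversarial training loop on the new corpus...
```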