KimythAnly / AGAIN-VC

This is the official implementation of the paper AGAIN-VC: A One-shot Voice Conversion using Activation Guidance and Adaptive Instance Normalization.
https://kimythanly.github.io/AGAIN-VC-demo/index
MIT License

Not great as demo #9

Closed: zhangxinaaaa closed this issue 3 years ago

zhangxinaaaa commented 3 years ago

Hey, thank you for your great work! I used your pretrained model to run inference on the demo samples, but the conversion quality is not as good as the demo's. Is it because the pretrained model needs more training time?

KimythAnly commented 3 years ago

Hi, could you provide your generated .wav files? Thanks.

zhangxinaaaa commented 3 years ago

Here is the link to the notebook where I ran inference on the demo samples; you can listen to and download the generated wave files. Thanks for your time and reply!

KimythAnly commented 3 years ago

Hi, the converted sample you used in the Colab was generated by AdaIN-VC, not our proposed model, AGAIN-VC. Also, the released pretrained model was trained with the latest code in this repo, so the generated results may not match those on the demo page. Thanks!

zhangxinaaaa commented 3 years ago

Could you share the code that generated the demo samples? I'm working on this topic and would like to learn from it. Thank you very much!

KimythAnly commented 3 years ago

Do you mean the models trained using other methods (AdaIN-VC, VQVC+, AutoVC)?

zhangxinaaaa commented 3 years ago

Yes, thank you very much!

KimythAnly commented 3 years ago

Hi, I'm working on integrating these models. However, these models are open source, so you can try training them yourself using their released code on GitHub. One thing to pay attention to: we use MelGAN as the vocoder for every model on the demo page.

zhangxinaaaa commented 3 years ago

Ok! And by the way, I enjoy listening to Hung-yi Lee's lectures very much. Thanks for your hard work!