Closed zhangxinaaaa closed 3 years ago
Hi, could you provide your generated .wav files? Thanks.
Here is the notebook link where I ran inference on the demo samples; you can listen to and download the generated wave files. Thanks for your time and reply!
Hi, the converted sample you used in the Colab was generated by AdaIN-VC, not our proposed model, AGAIN-VC. Also, the released pretrained model was trained with the latest code in this repo, so the generated results may not be identical to the ones on the demo page. Thanks!
Could you share the code that generates the demo samples? I'm working on this and would like to learn from the code. Thank you very much!
Do you mean the models trained using other methods (AdaIN-VC, VQVC+, AutoVC)?
Yes, thank you very much!
Hi, I'm working on integrating these models. However, these models are open source, so you can try training them using the code their authors released on GitHub. One thing to pay attention to: we use MelGAN as the vocoder for every model on the demo page.
Ok! And by the way, I enjoy listening to Hung-yi Lee's lessons very much. Thanks for your hard work!
Hey, thank you for your great work! I used your pretrained model to run inference on the demo samples, but the conversion quality is not as good as the demo's. Is that because the pretrained model needs more training time?