Closed: Charlottecuc closed this issue 2 years ago
Hi. This work is amazing. I noticed that the inference.py script only supports many-to-many conversion. Could you tell me how to modify it for any-to-many conversion, and also for singing voice conversion? Thank you very much.

The input features are speaker-independent, so you can feed in audio from any speaker you want to convert, and the same goes for singing input. If you'd like to train your own model, you can simply train on singing data and it will work as well. You may need to find a better vocoder for singing, though, because ParallelWaveGAN isn't very good for singing synthesis.
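To illustrate the idea, here is a minimal sketch of why speaker-independent input enables any-to-many conversion: the source speaker never enters the pipeline, only a content representation plus a trained target-speaker embedding. All function and variable names below are illustrative placeholders, not the repo's actual API, and the "encoder" and "decoder" are dummy stand-ins.

```python
import numpy as np

def extract_content(wav_features: np.ndarray) -> np.ndarray:
    """Stand-in for a speaker-independent content encoder.
    A real system would produce phonetic/linguistic features;
    here it is just a fixed random projection."""
    rng = np.random.default_rng(0)
    proj = rng.standard_normal((wav_features.shape[-1], 64))
    return wav_features @ proj  # (frames, 64) content features

def convert(content: np.ndarray, target_speaker_id: int,
            speaker_table: np.ndarray) -> np.ndarray:
    """Condition a (dummy) decoder on a trained target-speaker
    embedding. Note the source speaker is never referenced,
    which is what makes the pipeline any-to-many."""
    spk = speaker_table[target_speaker_id]  # (64,) target embedding
    return content + spk                    # dummy "decoder"

# Toy usage: any unseen source utterance, one of N trained targets.
utterance = np.ones((10, 80))       # fake mel frames from any speaker
speakers = np.zeros((4, 64))        # embeddings for 4 trained targets
features = extract_content(utterance)
out = convert(features, target_speaker_id=2, speaker_table=speakers)
print(out.shape)  # (10, 64)
```

The same structure explains the singing case: since the content encoder ignores speaker identity, it also carries over to singing input, subject to the vocoder caveat above.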