kan-bayashi / PytorchWaveNetVocoder

WaveNet-Vocoder implementation with pytorch.
https://kan-bayashi.github.io/WaveNetVocoderSamples/
Apache License 2.0
297 stars 57 forks

about speaker #44

Closed assyoucan closed 5 years ago

assyoucan commented 5 years ago

I would like to ask: if I train the network on speaker A's data and then, after training, feed in speech from speaker B, will the result still be good? Or do I need to retrain with B's data?

oytunturk commented 5 years ago

It works reasonably well in my limited tests, though quality and similarity to speaker B may suffer somewhat. Speaker-independent training recipes might work better, or at least be more robust to differences between speakers A and B.

kan-bayashi commented 5 years ago

In the case of voice conversion, we usually train a multi-speaker model and then fine-tune it using a small amount of single-speaker data.
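The fine-tuning workflow described above can be sketched in generic PyTorch. This is a minimal illustration, not this repository's actual API: `TinyVocoder`, the checkpoint filename, and the training data are all hypothetical stand-ins, and the real multi-speaker WaveNet model and recipes differ.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a vocoder network; the real repo's
# WaveNet model class and checkpoint format are different.
class TinyVocoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Conv1d(1, 8, kernel_size=3, padding=1)
        self.out = nn.Conv1d(8, 1, kernel_size=1)

    def forward(self, x):
        return self.out(torch.relu(self.conv(x)))

torch.manual_seed(0)

# 1) Pretraining stage: pretend these weights came from a
#    multi-speaker training run and save them as a checkpoint.
model = TinyVocoder()
torch.save(model.state_dict(), "multispeaker_pretrained.pt")

# 2) Fine-tuning stage: load the multi-speaker weights, then keep
#    training with a small learning rate on the target speaker's
#    (here: random placeholder) data.
model.load_state_dict(torch.load("multispeaker_pretrained.pt"))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.MSELoss()

x = torch.randn(4, 1, 64)  # placeholder target-speaker input frames
y = torch.randn(4, 1, 64)  # placeholder regression targets

for _ in range(5):
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```

The key idea is simply that training continues from the multi-speaker checkpoint rather than from random initialization, usually with a reduced learning rate so the small target-speaker dataset adapts the model without destroying what it learned from many speakers.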