Closed avi33 closed 1 year ago
In section 2.3.2 of the paper, you mention that the input to the speaker encoder varies between training stages. Can you please point to where in the code this takes place?
Also, do you train end to end, or is a pretrained generator given? Thanks in advance.
Please refer to this: https://github.com/quickvc/QuickVC-VoiceConversion/issues/10
the model is trained end to end
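For context, the stage-dependent speaker-encoder input asked about above could look roughly like the sketch below. All names here (`speaker_embedding`, `speaker_encoder_input`, the "warmup" stage label) are illustrative assumptions for this thread, not the actual QuickVC code — check the training script in the repo for the real logic.

```python
import numpy as np

def speaker_embedding(mel: np.ndarray) -> np.ndarray:
    # Toy stand-in for a speaker encoder: mean-pool mel frames
    # (frames, n_mels) into a fixed-size (n_mels,) vector.
    return mel.mean(axis=0)

def speaker_encoder_input(stage: str, src_mel: np.ndarray,
                          ref_mel: np.ndarray) -> np.ndarray:
    # Hypothetical stage switch: feed the source utterance's own mel in an
    # early stage, then a different (reference) utterance of the same
    # speaker in the later stage, so the encoder learns speaker identity
    # rather than utterance content.
    return src_mel if stage == "warmup" else ref_mel

# Example: 100-frame source and 120-frame reference, 80 mel bins each.
src = np.random.randn(100, 80)
ref = np.random.randn(120, 80)
emb_early = speaker_embedding(speaker_encoder_input("warmup", src, ref))
emb_late = speaker_embedding(speaker_encoder_input("main", src, ref))
print(emb_early.shape, emb_late.shape)  # (80,) (80,)
```

Since training is end to end, such a switch would just change which tensor is routed into the speaker encoder inside the training loop; gradients still flow through the whole model.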
Thank you for your quick response.