keonlee9420 / Cross-Speaker-Emotion-Transfer

PyTorch Implementation of ByteDance's Cross-speaker Emotion Transfer Based on Speaker Condition Layer Normalization and Semi-Supervised Training in Text-To-Speech
MIT License
187 stars 27 forks

the semi-supervised used #15

Open yiwei0730 opened 1 year ago

yiwei0730 commented 1 year ago

"The current implementation is not trained in a semi-supervised way due to the small dataset size. But it can be easily activated by specifying target speakers and passing no emotion ID with no emotion classifier loss." This is the note you wrote in the README. Could you explain what "specifying target speakers and passing no emotion ID with no emotion classifier loss" means in practice, and how to do it in your code?
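
To check my understanding, here is a minimal PyTorch sketch of how I imagine the "no emotion classifier loss when no emotion ID is passed" part could be done, i.e. masking the classifier loss for target-speaker samples that carry no emotion label. The names (`SemiSupervisedEmotionLoss`, `emotion_ids`, the `-1` placeholder for "no emotion ID") are my own assumptions and not from this repository:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemiSupervisedEmotionLoss(nn.Module):
    """Cross-entropy on emotion logits, skipped for samples without an emotion label."""

    def forward(self, emotion_logits, emotion_ids):
        # emotion_ids: LongTensor of shape (batch,), where -1 marks a
        # target-speaker sample with no emotion ID (the semi-supervised case).
        labeled = emotion_ids >= 0
        if labeled.sum() == 0:
            # No labeled samples in this batch: contribute zero classifier loss
            # while keeping the graph connected for the backward pass.
            return emotion_logits.sum() * 0.0
        return F.cross_entropy(emotion_logits[labeled], emotion_ids[labeled])


if __name__ == "__main__":
    batch, n_emotions = 4, 5
    logits = torch.randn(batch, n_emotions)
    # Two labeled samples and two unlabeled target-speaker samples (-1).
    ids = torch.tensor([2, -1, 0, -1])
    print(SemiSupervisedEmotionLoss()(logits, ids))
```

Is this roughly the idea, and where in the training loop would the target speakers be specified?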