auspicious3000 / SpeechSplit

Unsupervised Speech Decomposition Via Triple Information Bottleneck
http://arxiv.org/abs/2004.11284
MIT License
636 stars 92 forks

Question: Using a pretrained encoder for getting the speaker embedding. #29

Open nischal-sanil opened 3 years ago

nischal-sanil commented 3 years ago

Hi,

Did you guys experiment with using a pretrained encoder to obtain the speaker embedding, similar to your previous work (AutoVC)?

PS: Amazing work by the way!

Thanks,

FurkanGozukara commented 3 years ago

@nischal-sanil did you make it work?

can you check my question please? https://github.com/auspicious3000/SpeechSplit/issues/28

terbed commented 3 years ago

I have the same question @auspicious3000. Here you use a one-hot encoded embedding with a length of 82 (the number of speakers the model was pretrained on), but could you generate a zero-shot general embedding like in AutoVC? If I remember correctly, the embedding used there was larger, so I assume it cannot be used here.

So to wrap up: with the pretrained weights, this method works only on the 82 speakers it was trained and conditioned on, if we consider only the timbre conversion?

auspicious3000 commented 3 years ago

@terbed Yes. Unless you retrain the model.
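The limitation confirmed above can be sketched in a few lines. This is a minimal illustration, not the repo's actual code: it assumes the pretrained checkpoint conditions on a fixed one-hot vector of length 82, as stated in the thread, so any speaker outside that training set simply has no valid index.

```python
import numpy as np

NUM_SPEAKERS = 82  # dimension of the one-hot embedding in the pretrained checkpoint (per this thread)

def one_hot_speaker_embedding(speaker_index: int, num_speakers: int = NUM_SPEAKERS) -> np.ndarray:
    """Build a one-hot speaker embedding of fixed size.

    Only indices 0..num_speakers-1 are valid, so an unseen (zero-shot)
    speaker cannot be represented -- unlike AutoVC's learned d-vector
    encoder, which maps arbitrary reference audio to an embedding.
    """
    if not 0 <= speaker_index < num_speakers:
        raise ValueError(f"speaker {speaker_index} was not in the training set")
    emb = np.zeros(num_speakers, dtype=np.float32)
    emb[speaker_index] = 1.0
    return emb

# A training-set speaker works:
emb = one_hot_speaker_embedding(5)

# An unseen speaker has no index and raises:
try:
    one_hot_speaker_embedding(100)
except ValueError:
    pass  # zero-shot conversion is impossible without retraining
```

Swapping in a pretrained speaker encoder would also change the embedding dimension the decoder expects, which is why the checkpoint cannot be reused as-is.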