auspicious3000 / autovc

AutoVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss
https://arxiv.org/abs/1905.05879
MIT License

Downsampling process is different from that described in the paper #22

Open light1726 opened 4 years ago

light1726 commented 4 years ago

Thanks for sharing the code, you did a great job. I noticed that in the paper the downsampling along the temporal axis differs between the forward sequence and the backward sequence, but in the code the downsampling of the forward sequence follows exactly the process the paper describes for the backward sequence. I'm confused because the two schemes (the one in the code and the one in the paper) should behave differently: they encode different contextual information. Since the code is more up to date, is the downsampling process in the code the better one? https://github.com/auspicious3000/autovc/blob/2d8a6c8856f59c1c2af5cf8d4143b0a9605fbe0e/model_vc.py#L79
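For concreteness, the downsampling scheme at the linked line can be sketched as follows (a minimal NumPy sketch; the function name, shapes, and variable names are illustrative, not taken from the repo):

```python
import numpy as np

def downsample(out_forward, out_backward, freq):
    # out_forward, out_backward: (batch, time, dim) outputs of a
    # bidirectional encoder, split by direction.
    # In this sketch the forward direction is sampled at the END of each
    # window of `freq` frames and the backward direction at the START,
    # then the two are concatenated along the feature axis.
    codes = []
    for i in range(0, out_forward.shape[1], freq):
        codes.append(np.concatenate(
            (out_forward[:, i + freq - 1, :], out_backward[:, i, :]),
            axis=-1))
    return codes
```

Sampling the forward state at the end of a window and the backward state at its start means each code summarizes the same window from both directions, which is exactly why the forward/backward asymmetry discussed above matters.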

light1726 commented 4 years ago

By the way, do you cut the input sequence into fixed-length segments whose length is a multiple of the downsampling frequency during training? If so, how long is each segment?

auspicious3000 commented 4 years ago

Thanks. The code is correct. 2 seconds.
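As a sanity check, "2 seconds" is consistent with a crop length that is a multiple of the downsampling frequency; a quick back-of-the-envelope calculation, assuming a 16 kHz sample rate, a hop length of 256 samples, a downsampling frequency of 32, and a crop of 128 mel frames (these particular values are assumptions, not confirmed in this thread):

```python
sample_rate = 16000  # assumed audio sample rate (Hz)
hop_length = 256     # assumed STFT hop in samples
freq = 32            # assumed downsampling frequency
len_crop = 128       # assumed fixed crop length in mel frames

# The crop must divide evenly into downsampling windows.
assert len_crop % freq == 0

# Duration of the crop in seconds: 128 * 256 / 16000 = 2.048 s,
# i.e. roughly the "2 seconds" mentioned above.
duration = len_crop * hop_length / sample_rate
```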

light1726 commented 4 years ago

I see, thanks for the answer.