Hi Olivia,
Thank you very much for sharing your code. It's great work!
Currently, I am training the single-source model on the preprocessed VoxCeleb1 dataset (the 7.8 GB one at 1 fps). I noticed that the training parameters used in the code differ from those in the paper: for example, the learning rate is 0.0001 instead of 0.001, and the batch size is 128 instead of 32. Using the default parameters set in the code, training converges slowly. Here is the training curve:
However, as you mentioned in issue #1, training converged quickly on Vox1, in fewer than 1000 iterations.
I used your data loader to load the dataset. Should I set the parameters to the values mentioned in the paper?
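In case it clarifies what I'm planning to try: I would override the code defaults with the paper's values along these lines (a minimal sketch only; I'm assuming an argparse-style training script, and the flag names here are my guess, not your actual interface):

```python
import argparse

# Sketch of overriding the repo defaults with the paper's hyperparameters.
# NOTE: "--lr" and "--batch_size" are hypothetical flag names for illustration.
parser = argparse.ArgumentParser()
parser.add_argument("--lr", type=float, default=1e-4)       # default in the code
parser.add_argument("--batch_size", type=int, default=128)  # default in the code

# Values from the paper that I plan to try instead:
args = parser.parse_args(["--lr", "0.001", "--batch_size", "32"])
print(args.lr, args.batch_size)  # 0.001 32
```

Please let me know if the paper's settings are the ones I should reproduce, or if the defaults in the code are the recommended ones.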
Thanks!