We provide a PyTorch implementation of the paper Voice Separation with an Unknown Number of Multiple Speakers, in which we present a new method for separating a mixed audio sequence where multiple voices speak simultaneously. The method employs gated neural networks that are trained to separate the voices at multiple processing steps, while keeping the speaker in each output channel fixed. A different model is trained for every possible number of speakers, and the model with the largest number of speakers is used to select the actual number of speakers in a given sample. Our method greatly outperforms the current state of the art, which, as we show, is not competitive for more than two speakers.
I was trying to train the svoice model on the Libri2Mix data (sample_rate = 16000, about 13,900 samples in total) on a multi-GPU (4x K80) setup with ddp=1. The training time per epoch is fairly high, over 6 hours. I just wanted to check: is this expected, or is something going wrong here?
Hi @spikeeSakshu,
It depends on the segment size and stride you use in the config file.
You can try using larger strides or smaller segments.
Generally speaking, you should look at the runtime per sample rather than per epoch, since the number of examples per epoch can vary.
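A quick back-of-the-envelope count shows why the stride matters so much for epoch time (a sketch only; the exact windowing in svoice's dataloader may differ):

```python
def num_windows(total_seconds, segment, stride):
    """Number of training windows cut from `total_seconds` of audio
    when slicing into `segment`-second pieces every `stride` seconds."""
    if total_seconds < segment:
        return 0
    return int((total_seconds - segment) // stride) + 1

# Doubling the stride roughly halves the windows (and thus the steps) per epoch:
dense = num_windows(60, segment=4, stride=1)   # 57 windows
sparse = num_windows(60, segment=4, stride=2)  # 29 windows
```

So a longer stride or shorter segment directly shrinks the number of optimizer steps per epoch, at the cost of seeing fewer (or shorter) overlapping crops of each utterance.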