We provide a PyTorch implementation of the paper Voice Separation with an Unknown Number of Multiple Speakers, in which we present a new method for separating a mixed audio sequence where multiple voices speak simultaneously. The new method employs gated neural networks that are trained to separate the voices at multiple processing steps, while keeping the speaker in each output channel fixed. A different model is trained for every possible number of speakers, and the model with the largest number of speakers is used to select the actual number of speakers in a given sample. Our method greatly outperforms the current state of the art, which, as we show, is not competitive for more than two speakers.
Can't separate using CPU while net trained on GPU #20
I have trained the net on a GPU.
I'm trying to separate some files from the mix folder:
!python -m svoice.separate /path/to/checkpoint.th /path/to/separated_output --mix_dir=/path/to/mix --device="cpu"
And I get this error message:
RuntimeError: Attempting to deserialize object on a CUDA device but torch.cuda.is_available() is False. If you are running on a CPU-only machine, please use torch.load with map_location=torch.device('cpu') to map your storages to the CPU.
P.S. I have also tried different ways to specify the CPU: --device=cpu, --device='cpu'.
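The error message itself points at the underlying issue: the checkpoint was saved on a CUDA device, so deserializing it on a CPU-only machine requires torch.load with map_location pointing at the CPU. One possible workaround that does not require changing the svoice code is to create a CPU-mapped copy of the checkpoint on the CPU machine and pass that copy to svoice.separate. This is a minimal sketch, not a confirmed fix for this repo; the paths are placeholders and the exact contents of the checkpoint dict are an assumption.

```python
# Hypothetical workaround sketch: re-save a CUDA-trained checkpoint so that all
# tensors are mapped to the CPU, allowing later torch.load calls to succeed on a
# CPU-only machine. Paths are placeholders.
import torch

ckpt_path = "/path/to/checkpoint.th"          # checkpoint produced on the GPU machine
cpu_ckpt_path = "/path/to/checkpoint_cpu.th"  # CPU-mapped copy to pass to svoice.separate

# map_location forces every stored tensor onto the CPU during deserialization,
# which avoids "Attempting to deserialize object on a CUDA device".
checkpoint = torch.load(ckpt_path, map_location=torch.device("cpu"))

# Save the CPU-mapped copy; subsequent loads no longer require CUDA.
torch.save(checkpoint, cpu_ckpt_path)
```

After re-saving, pointing the separation command at the CPU-mapped copy (e.g. `!python -m svoice.separate /path/to/checkpoint_cpu.th /path/to/separated_output --mix_dir=/path/to/mix --device=cpu`) may avoid the deserialization error, assuming the separation script loads the checkpoint with a plain torch.load.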