facebookresearch / svoice

We provide a PyTorch implementation of the paper Voice Separation with an Unknown Number of Multiple Speakers, in which we present a new method for separating a mixed audio sequence in which multiple voices speak simultaneously. The method employs gated neural networks that are trained to separate the voices over multiple processing steps, while keeping the speaker assigned to each output channel fixed. A different model is trained for every possible number of speakers, and the model with the largest number of speakers is used to select the actual number of speakers in a given sample. Our method greatly outperforms the current state of the art, which, as we show, is not competitive for more than two speakers.
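The last step described above (using the largest-capacity model to decide how many speakers are actually present) can be approximated by checking which output channels carry signal energy. The sketch below is a hypothetical illustration, not the selection code from this repository: the function name, the energy-based criterion, and the `-20 dB` threshold are all assumptions.

```python
import torch

def estimate_num_speakers(separated: torch.Tensor, threshold_db: float = -20.0) -> int:
    """Count output channels whose energy exceeds a silence threshold.

    separated: (num_channels, num_samples) output of the largest-capacity model.
    threshold_db: channels this many dB below the loudest channel count as silent.
    (Hypothetical heuristic, not the paper's exact selection procedure.)
    """
    # Per-channel mean power, converted to dB relative to the loudest channel.
    power = separated.pow(2).mean(dim=-1)              # (num_channels,)
    power_db = 10 * torch.log10(power + 1e-12)
    rel_db = power_db - power_db.max()
    return int((rel_db > threshold_db).sum().item())

# Example: two active sinusoid channels and one near-silent residual channel.
t = torch.linspace(0, 1, 16000)
out = torch.stack([
    torch.sin(2 * torch.pi * 440 * t),   # active "speaker"
    torch.sin(2 * torch.pi * 220 * t),   # active "speaker"
    1e-4 * torch.randn(16000),           # near-silent channel
])
print(estimate_num_speakers(out))  # 2
```

In practice the threshold would need tuning on held-out mixtures, since a quiet but real speaker should not be discarded as silence.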

Guidelines for model training #61

Open arikhalperin opened 2 years ago

arikhalperin commented 2 years ago

Hello,

Any idea about the following:

1) What is the optimal amount of call data per language we should use?
2) Should we build a model per language, or can we build one multilingual model that works well across languages? (The model I built for English was great for English, but not so good for Spanish.)

With Best Regards, Arik Halperin

adiyoss commented 2 years ago

Hi @arikhalperin, I think it can work in the multilingual setting. We actually did something like that (training on English and testing on Hebrew), and it works fine: not as good as on English, but it still separates the signals.