facebookresearch / svoice

We provide a PyTorch implementation of the paper Voice Separation with an Unknown Number of Multiple Speakers, in which we present a new method for separating a mixed audio sequence in which multiple voices speak simultaneously. The method employs gated neural networks that are trained to separate the voices at multiple processing steps, while keeping the speaker in each output channel fixed. A different model is trained for every possible number of speakers, and the model with the largest number of speakers is used to select the actual number of speakers in a given sample. Our method greatly outperforms the current state of the art, which, as we show, is not competitive for more than two speakers.

Evaluating #67

Open cchoi1022 opened 2 years ago

cchoi1022 commented 2 years ago

I know this probably isn't a real issue; I'm probably just misunderstanding this part. To evaluate, we run the following command: python -m svoice.evaluate "path to the model" "path to folder containing mix.json and all target separated channels json files s.json". I filled in the quoted parts with what I think is right, so it looks something like this: python -m svoice.evaluate outputs/exp_/model.pt egs/debug/tr

But I got this error from line 53 in evaluate.py: TypeError: argument of type 'SWave' is not iterable. Did I do something wrong? The training part went smoothly. What can I do to fix this?

I tried bypassing the problem by setting model = pkg and skipping the deserialization step, but that may have just introduced more problems. I'm also hitting an issue with separation, where I get this error: ValueError: max() arg is an empty sequence
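For context on that TypeError: a membership test such as "model" in pkg works on a dict but raises exactly this error when pkg is a bare model object with no __contains__ or __iter__. Below is a minimal, self-contained sketch of the mechanism; the SWave stand-in class and the checkpoint keys ("model", "args") are illustrative assumptions, not the repo's actual definitions.

```python
import torch.nn as nn

# Stand-in for the serialized model object (illustration only, not the real SWave).
class SWave(nn.Module):
    def forward(self, x):
        return x

pkg = SWave()  # what you get back if the saved file holds the bare model

# A membership test on a module instance fails, because nn.Module is not iterable:
try:
    _ = "model" in pkg
except TypeError as err:
    print(err)  # argument of type 'SWave' is not iterable

# A package-style checkpoint is a plain dict, so the same test works fine:
checkpoint = {"model": pkg.state_dict(), "args": {}}
print("model" in checkpoint)  # True
```

So the usual fix is to point the evaluation script at the package-style checkpoint (the .th file the trainer writes) rather than at a bare pickled model, which matches the suggestion further down in this thread.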

How do I use the model to separate?
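On the separation question: if I remember the README correctly, separation goes through a python -m svoice.separate entry point that takes the model, an output directory, and the location of the mixture files; check the README's separation section for the exact arguments rather than relying on my memory. As for ValueError: max() arg is an empty sequence, that is what Python's built-in max() raises on an empty list, and it usually means the script found no mixture files at the path it was given. A quick sanity check, with placeholder paths:

```python
import glob
import os

# Placeholder path -- point this at the directory you pass to the separation script.
mix_dir = "path/to/mixtures"
mix_files = sorted(glob.glob(os.path.join(mix_dir, "*.wav")))
print(f"found {len(mix_files)} mixture files in {mix_dir}")

# Collation/padding code typically takes max() over the clip lengths, and
# max() on an empty sequence raises exactly the ValueError above, so an
# empty listing here would explain the crash.
if not mix_files:
    print("no .wav files found -- max() over their lengths would fail")
```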

qalabeabbas49 commented 1 year ago

I think it should be model.th (i.e., checkpoint.th). Also make sure that the egs/debug/tr folder has mix.json, s1.json, and s2.json (ideally you would put the location of the test set there).
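If it helps, each of those json files appears to be a list of [wav_path, num_samples] pairs; the sketch below builds them that way, but verify the exact layout against one of the jsons produced by the repo's own data preparation, since both the format and the paths here are assumptions. torchaudio is used only as one convenient way to read each file's sample count.

```python
import json
import os

import torchaudio  # used only to read each file's sample count


def build_json(wav_dir, out_path):
    # One [absolute_path, num_samples] entry per .wav file -- the layout the
    # debug recipe's mix.json / s1.json / s2.json appear to follow (assumption).
    entries = []
    for name in sorted(os.listdir(wav_dir)):
        if name.endswith(".wav"):
            path = os.path.abspath(os.path.join(wav_dir, name))
            entries.append([path, torchaudio.info(path).num_frames])
    with open(out_path, "w") as f:
        json.dump(entries, f, indent=2)


# Placeholder locations for a two-speaker test set.
build_json("dataset/tt/mix", "egs/mydata/tt/mix.json")
build_json("dataset/tt/s1", "egs/mydata/tt/s1.json")
build_json("dataset/tt/s2", "egs/mydata/tt/s2.json")
```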