sigsep / open-unmix-pytorch

Open-Unmix - Music Source Separation for PyTorch
https://sigsep.github.io/open-unmix/
MIT License

Obtaining weights for streaming implementation #14

Closed tommy-fox closed 5 years ago

tommy-fox commented 5 years ago

I'm interested in implementing a real-time, streaming version of the separation method.

Do you have any advice on how to extract the model weights for this?

Would it be best to retrain, and save the weights during training?

aadibajpai commented 5 years ago

This might be of help to you, https://github.com/sigsep/open-unmix-pytorch#bidirectional-lstm

aliutkus commented 5 years ago

@aadibajpai thanks for providing this link. We indeed do not release weights for a forward (unidirectional) LSTM network. However, as mentioned in the docs there, it is possible to train Open-Unmix this way. We would be happy to reference your implementation.
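As a rough illustration of the change being discussed (not the actual Open-Unmix code; class and parameter names here are made up for the sketch), swapping the bidirectional LSTM for a forward-only one is a single-flag change in PyTorch, which is what makes causal, real-time inference possible:

```python
import torch
import torch.nn as nn

class CausalMasker(nn.Module):
    """Toy stand-in for an Open-Unmix-style separator: a unidirectional
    (forward-only) LSTM predicts a magnitude mask, so inference never
    needs future frames."""

    def __init__(self, nb_bins=2049, hidden_size=256, nb_layers=3):
        super().__init__()
        self.lstm = nn.LSTM(
            input_size=nb_bins,
            hidden_size=hidden_size,
            num_layers=nb_layers,
            batch_first=True,
            bidirectional=False,  # key change vs. the released BLSTM weights
        )
        self.fc = nn.Linear(hidden_size, nb_bins)

    def forward(self, spec, state=None):
        # spec: (batch, frames, bins) magnitude spectrogram
        out, state = self.lstm(spec, state)
        mask = torch.sigmoid(self.fc(out))
        return spec * mask, state

model = CausalMasker()
x = torch.rand(1, 10, 2049)
masked, state = model(x)
print(masked.shape)  # torch.Size([1, 10, 2049])
```

A model built this way has to be retrained from scratch, since the released weights assume bidirectional recurrence.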

does this solve the issue?

best,

A.

tommy-fox commented 5 years ago

That is very helpful, thank you! I will let you know how it turns out.

Tommy

tommy-fox commented 4 years ago

Hi, I just wanted to give you a quick update. I have a streaming version of Open-Unmix running, using unidirectional models for music (trained on MUSDB) and also for speech. I'll leave a link to the project at the bottom. I tried to cite your work properly, but please let me know if I missed anything or if you'd prefer any changes to the citation or otherwise.

Open-Unmix was a great help in my learning process on many fronts. Thanks for all of your work!

Open-Unmix Stream
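The linked project's internals aren't shown here, but the core idea behind streaming a unidirectional LSTM separator is carrying the (hidden, cell) state across chunk boundaries, so chunk-wise output matches a single full-length pass. A minimal sketch with a plain `nn.LSTM` (the layer sizes and mask head are illustrative, not Open-Unmix's actual configuration):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
lstm = nn.LSTM(input_size=513, hidden_size=128, num_layers=2, batch_first=True)
fc = nn.Linear(128, 513)

def stream_chunks(chunks):
    """Process spectrogram chunks sequentially, carrying the LSTM's
    (hidden, cell) state across chunks so the result is the same as
    processing the whole signal in one call."""
    state = None
    outputs = []
    with torch.no_grad():
        for chunk in chunks:  # each chunk: (1, frames, bins)
            out, state = lstm(chunk, state)
            outputs.append(torch.sigmoid(fc(out)) * chunk)
    return torch.cat(outputs, dim=1)

x = torch.rand(1, 12, 513)               # 12 frames, fed in chunks of 4
streamed = stream_chunks(torch.split(x, 4, dim=1))

with torch.no_grad():                    # reference: one full-length pass
    full, _ = lstm(x)
    full = torch.sigmoid(fc(full)) * x

print(torch.allclose(streamed, full, atol=1e-6))  # True
```

With a bidirectional LSTM this equivalence is impossible, since the backward pass needs the entire sequence up front; that is why the released weights can't be used for this.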

aliutkus commented 4 years ago

that's great! Do you want the sources for the figures, so you can mention an LSTM instead of a BLSTM? We may reuse this and will let you know how it turns out for us. What about performance? Did you check the metrics you get? Talk soon.