Open BreezeWhite opened 3 years ago
Hi @BreezeWhite
Thank you for reporting this issue.
Indeed the mechanism for separation is a bit different between the `librosa` and `tensorflow` backends. In the `tensorflow` backend, an infinite tensorflow dataset is created to feed a tensorflow estimator that performs the predictions. The dataset is fed with new data each time the `separate` (or `separate_to_file`) method is called. Thus there is an underlying graph for this dataset that is never closed, and that may be responsible for the issue you encountered.
Note that on GPU, the `tensorflow` backend is much faster at performing separation, so it would be a pity to use the `librosa` backend instead.
I can't see a proper workaround at the moment besides what you suggested, which seems to cause speed issues. Rewriting the way spleeter uses tensorflow to perform separation may actually be necessary, but I can't see a better way so far, so any help on this topic would be appreciated.
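To make that mechanism concrete, here is a toy plain-Python sketch of the pattern (no tensorflow involved; all names are invented for illustration): a single long-lived infinite stream is created once and fed one new item per call, which is why the stream, and in spleeter's case its underlying graph, is never closed.

```python
import queue

# Toy model of the tensorflow backend's feeding pattern: an "infinite
# dataset" created once, which blocks until separate() pushes new work.
_work = queue.Queue()

def infinite_source():
    # Never returns: like the infinite tensorflow dataset, the stream is
    # created once and kept open for the lifetime of the process.
    while True:
        yield _work.get()

# Created a single time, analogous to the dataset feeding the estimator.
_source = infinite_source()

def separate(waveform):
    # Each call feeds one new item into the shared stream, mirroring how
    # spleeter's separate() pushes new audio into the same dataset.
    _work.put(waveform)
    return next(_source)
```

Because `_source` is shared module state that is never exhausted or closed, repeated calls to `separate` keep reusing it, which is the analogue of the never-finalized dataset graph described above.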
Description
I'm using `spleeter` as a python package in my code, and when using tensorflow as the `stft_backend`, the default global tensorflow graph is finalized after spleeter finishes the separation, so tensorflow models can no longer be initialized in any later processing.
Steps to reproduce
1. pip install spleeter
2. Use `spleeter` as a python package with the tensorflow `stft_backend` and run a separation.
3. Try to build another tensorflow model: a RuntimeError: Graph is finalized and cannot be modified. error is raised.
Version
- spleeter - 2.0.1
- tensorflow - 2.3.0
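As a hedged sketch of a reproduction (assuming the spleeter 2.0 python API; the `Separator` arguments and the array shape are my guesses, not taken from the report):

```python
import numpy as np
import tensorflow as tf
from spleeter.separator import Separator

# Hypothetical reproduction of the report (parameters are illustrative).
sep = Separator("spleeter:2stems", stft_backend="tensorflow")
waveform = np.zeros((44100, 2), dtype=np.float32)  # 1 second of stereo silence
sep.separate(waveform)

# After the separation the global default graph is finalized, so building
# any new tensorflow model fails with:
#   RuntimeError: Graph is finalized and cannot be modified.
tf.keras.Sequential([tf.keras.layers.Dense(1)])
```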
Currently my workaround is to set
sep._params["stft_backend"] = "librosa"
to avoid using the tensorflow backend. I've checked the code: with the librosa approach, a new local graph is initialized and all later separations run within that graph. The default global graph is thus never finalized, and initializing new tensorflow models later in the process works fine.
I've also tried to initialize the tensorflow model after the separation, like the following:
Though it succeeded, loading some pre-trained models inside the `with` block was extremely slow compared to loading the same model outside graph mode: loading the pre-trained models usually takes only a few seconds, but up to 8~10 minutes in graph mode.
Spleeter is a great tool, and one of the easiest to get started with that I've found among open-source projects. I hope this issue helps others who encounter the same problem. It took me a few days to work around this bug, and I hope there will be a good solution in the future.
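The snippet being referred to is not preserved in the thread; my best guess at its shape (hypothetical, not the author's actual code, and the model path is invented) is something like:

```python
import tensorflow as tf

# Hypothetical reconstruction of the second workaround: do all later model
# work inside a fresh local graph, leaving the finalized global default
# graph untouched.
graph = tf.Graph()
with graph.as_default():
    # Inside this block tensorflow runs in graph mode rather than eagerly,
    # which is what the author reports makes model loading take minutes
    # instead of seconds.
    model = tf.keras.models.load_model("pretrained_model.h5")  # invented path
```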
Thanks.