jacoblam3112 opened this issue 1 month ago
I asked myself that question last year too, and couldn't find a good answer. I asked some respected researchers in the source separation community about it. They also didn't have a satisfying answer, so I decided to develop my own metric that does not have this problem. I called it logWMSE, and you can find more information about it here:
https://github.com/nomonosound/log-wmse-audio-quality/
Some people have also reported success using this metric as a training objective (i.e. as a loss function). You can find code for that here: https://github.com/crlandsc/torch-log-wmse
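For intuition, here is a toy, hand-rolled sketch of why an error-based log metric stays finite on a silent target, unlike ratio-based SDR. This is not the actual logWMSE algorithm (as I understand it, logWMSE additionally applies frequency weighting and takes the unprocessed input into account; this toy version omits both):

```python
import numpy as np

def toy_log_error_score(est, ref, eps=1e-8):
    """Toy error-based score, higher is better.
    Because it scores the absolute error instead of the
    energy ratio ||ref||^2 / ||ref - est||^2, an all-zero
    reference does not send it to -inf."""
    mse = np.mean((est - ref) ** 2)
    return -10.0 * np.log10(mse + eps)

rng = np.random.default_rng(0)
silent_ref = np.zeros(44100)                      # stem absent in this track
quiet_est = 1e-4 * rng.standard_normal(44100)     # model leaks a tiny signal

score = toy_log_error_score(quiet_est, silent_ref)  # finite, and high (good)
```

A near-silent estimate against a silent reference gets a high, finite score here, which matches the behavior you want from an evaluation metric in that edge case.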
Yes, I've also heard about logWMSE for such cases. It's already implemented in the repo: https://github.com/ZFTurbo/Music-Source-Separation-Training/blob/7e2cc6ecd134f0d108c8c80b0d03ed48f8c549d7/train.py#L124
You can use it like this:
--metrics log_wmse sdr
--metric_for_scheduler log_wmse
Thank you for the great answers.
The newly added 'L1_freq' metric also behaves well on silent content (probably other STFT-based metrics do too, but I haven't tested them).
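To see why an STFT-based distance sidesteps the silent-reference problem, here is a hypothetical sketch of an L1 magnitude-spectrogram distance (lower is better). This is my own illustration, not the repo's actual 'L1_freq' implementation:

```python
import numpy as np

def l1_freq_distance(est, ref, n_fft=512, hop=128):
    """L1 distance between magnitude spectrograms.
    A difference of magnitudes, not a ratio, so an all-zero
    reference simply contributes zero magnitude and the
    result stays finite."""
    def mag_stft(x):
        win = np.hanning(n_fft)
        frames = np.stack([x[i:i + n_fft] * win
                           for i in range(0, len(x) - n_fft + 1, hop)])
        return np.abs(np.fft.rfft(frames, axis=1))
    return float(np.mean(np.abs(mag_stft(est) - mag_stft(ref))))

silent_ref = np.zeros(4096)
quiet_est = 1e-4 * np.ones(4096)   # tiny leaked 'other' stem
d = l1_freq_distance(quiet_est, silent_ref)  # small and finite, as desired
```

A quiet estimate against a silent reference yields a small, finite distance, so averaging across tracks is not wrecked by one silent stem.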
Hi, thank you for building this awesome repo.
I have trained a custom 5-stem model (bass, drums, guitar, vocals, and other). For some of the evaluation tracks, there is no 'other' stem because the track only contains the other 4 stems to begin with. For those tracks, my model still predicts an 'other' stem, although with a very small amplitude. When I use an all-zero reference signal and calculate SDR, I get an extremely low SDR value (e.g. -80 dB) because the reference signal energy is 0. This ruins the model's average SDR score. How are such cases generally handled when quantitatively evaluating model performance?
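The blow-up described above can be reproduced in a few lines, along with one common workaround: excluding stems whose reference energy is below a threshold before averaging. The sketch below is an illustration under that assumption, not a universal evaluation standard (the names `sdr_db` and `mean_sdr_skipping_silence` are mine):

```python
import numpy as np

def sdr_db(ref, est, eps=1e-12):
    """Plain SDR: 10*log10(||ref||^2 / ||ref - est||^2).
    With an all-zero reference the numerator is 0, so the value
    dives toward -inf; eps merely caps it at a huge negative number."""
    num = np.sum(ref ** 2)
    den = np.sum((ref - est) ** 2)
    return 10.0 * np.log10((num + eps) / (den + eps))

silent_ref = np.zeros(44100)
quiet_est = 1e-4 * np.ones(44100)       # model leaks a tiny 'other' stem
bad = sdr_db(silent_ref, quiet_est)      # huge negative, dominated by eps,
                                         # not by separation quality

def mean_sdr_skipping_silence(pairs, energy_thresh=1e-8):
    """Average SDR over (ref, est) pairs, skipping pairs whose
    reference energy is below energy_thresh (one common workaround;
    another option is to report the median instead of the mean)."""
    vals = [sdr_db(r, e) for r, e in pairs
            if np.sum(r ** 2) > energy_thresh]
    return float(np.mean(vals)) if vals else float("nan")
```

With this filter, the silent-'other' tracks no longer drag the average down; the trade-off is that the metric then says nothing about leakage into absent stems, which is exactly the gap metrics like logWMSE aim to fill.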