craffel / mir_eval

Evaluation functions for music/audio information retrieval/signal processing algorithms.
MIT License

fftpack → fft #381

Closed. bmcfee closed this 3 months ago

bmcfee commented 5 months ago

Fixes #373

This PR implements a number of small improvements to the separation module:

This should altogether get us about a 50% speedup in separation metrics and bring down the test build times considerably. I don't think this will affect accuracy at all. However, we should probably also roll in #376 once the dust settles.


This PR bumps the minimum supported scipy to 1.5.4 (from 1.4.0) to avoid a bug in the `next_fast_len` implementation for real-valued inputs. 1.5.4 was released in Nov. 2020, while 1.4.0 was released in Dec. 2019. Both are far enough in the past that I don't consider this a big deal, but I'm noting it here in case it warrants discussion.
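
For illustration, the shape of the change looks roughly like this (a sketch, not the actual diff):

```python
import numpy as np
import scipy.fft

# Rough sketch of the kind of change involved (not the actual diff): pad to
# a fast transform length and use the real-input FFT instead of fftpack.
x = np.random.randn(44100)

# real=True picks a length that is fast specifically for real-input
# transforms; this is the code path affected by the scipy bug noted above.
n_fft = scipy.fft.next_fast_len(len(x), real=True)

X = scipy.fft.rfft(x, n=n_fft)                  # half-spectrum of the real signal
x_rec = scipy.fft.irfft(X, n=n_fft)[: len(x)]   # round trip back to the time domain
```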

bmcfee commented 5 months ago

This is interesting. Tests are essentially all passing, except for the framewise image metric on dataset number 5. I believe this case has been a heisenbug in the past, at least on OSX, but it now affects all platforms.

What's strange is that the deviation is extremely non-uniform: on some platforms it's off by about 7 dB, on some 10, and on some 23. This leads me to believe the underlying cause is numerical instability in a least-squares estimation, making us highly sensitive to random initialization and floating-point ops, and this example just happens to be particularly good at triggering it. I'm not yet 100% certain, but that seems like the most likely explanation here, since all the other cases are passing with the specified 0.01 dB tolerance.

bmcfee commented 5 months ago

Interestingly, dataset06 also fails on my dev machine, but only by 0.02 dB (the #376 issue).

bmcfee commented 5 months ago

Ok, I've dug into this failure a little more. It's definitely a numerical stability issue.

The offending case is ref07 (the numbering doesn't line up with the fixture numbering, since we removed a couple of test cases some years back). In this data, the second reference source goes dead silent in the middle of the recording, but with a negative DC offset (see the attached waveform screenshot).

When divided up into frames, we get numerical equivalency in framewise metrics everywhere except for one frame:

(proposed rfft branch)

>>> image_frame_scores[key]
[[1.756411590484389,
  1.7597473682072609,
  1.7602869182177234,
  1.7594335093388715,
  1.7595170180901967,
  1.759276007646068,
  1.7591737609125007,
  1.759391260702998,
  1.760218424402917,
  1.7600362671204706],
 [1.6992851561218936,
  -8.959569954271648,
  1.7050670369298637,
  1.704666147437159,
  1.7049902335582687,
  1.7049335634150566,
  1.70516553136305,
  1.7047425405295848,
  1.704666154642886,
  1.7048499625553668]]

compared to our target pre-computed scores:

>>> expected_image_frames[test_data_name]
[[1.7564115904843982,
  1.7597473682073619,
  1.7602869182174874,
  1.7594335093388682,
  1.7595170180900648,
  1.7592760076460983,
  1.759173760912185,
  1.7593912607029725,
  1.7602184244029835,
  1.7600362671204637],
 [1.699285156121919,
  1.7045012461538642,
  1.7050670369289587,
  1.704666147436396,
  1.7049902335583038,
  1.704933563419315,
  1.7051655313634067,
  1.7047425405294456,
  1.7046661546430155,
  1.7048499625703208]]

The diff comes to:

array([[-9.10382880e-15, -1.01030295e-13,  2.36033415e-13,
         3.33066907e-15,  1.31894495e-13, -3.01980663e-14,
         3.15747428e-13,  2.55351296e-14, -6.63913369e-14,
         6.88338275e-15],
       [-2.55351296e-14, -1.06640712e+01,  9.05053810e-13,
         7.62945263e-13, -3.50830476e-14, -4.25837143e-12,
        -3.56603636e-13,  1.39221967e-13, -1.29452005e-13,
        -1.49540380e-11]])

I expect what's happening here is that the silent frame of the reference signal causes a severe instability in the matrix inversion, but not one severe enough to trigger an exception here: https://github.com/craffel/mir_eval/blob/6b82ddef3675ba7613aa454f74bc182ee3ade4a3/mir_eval/separation.py#L792-L795
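
To make that concrete, here's a toy version of the failure mode I have in mind (made-up data and shapes, not the actual fixture):

```python
import numpy as np

# A near-silent source with a tiny DC offset makes the Gram matrix nearly
# singular, yet np.linalg.solve returns without raising LinAlgError.
rng = np.random.default_rng(0)
active = rng.standard_normal(4096)
silent = np.full(4096, -1e-12)        # "silent" source with a small negative DC
ref = np.vstack([active, silent])

G = ref @ ref.T                        # 2x2 Gram matrix of the reference sources
d = ref @ rng.standard_normal(4096)

print(np.linalg.cond(G))               # enormous condition number
c = np.linalg.solve(G, d)              # no exception, but c is untrustworthy
```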

I put in some debug statements, and we're definitely not hitting the lstsq code path. Replacing the np.linalg.solve call with scipy.linalg.solve confirms this via a warning:

/home/bmcfee/git/mir_eval/mir_eval/separation.py:793: LinAlgWarning: Ill-conditioned matrix (rcond=1.10403e-22): result may not be accurate.
  C = scipy.linalg.solve(G, D).reshape(flen, nchan * nsrc, nchan, order="F")

I'm now testing out a fix that re-casts this LinAlgWarning as an exception so that we can fall back on lstsq. So far it seems promising; an updated push is coming shortly.
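
For reference, the fallback looks roughly like this (a sketch with made-up names, not the exact code in the push):

```python
import warnings
import scipy.linalg

def solve_or_lstsq(G, D):
    # Sketch of the proposed fallback: promote scipy's ill-conditioning
    # warning to an error so it can be caught alongside LinAlgError, then
    # fall back to a least-squares solution.
    try:
        with warnings.catch_warnings():
            warnings.simplefilter("error", scipy.linalg.LinAlgWarning)
            return scipy.linalg.solve(G, D)
    except (scipy.linalg.LinAlgWarning, scipy.linalg.LinAlgError):
        # Ill-conditioned or singular system: use lstsq instead.
        C, *_ = scipy.linalg.lstsq(G, D)
        return C
```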

bmcfee commented 5 months ago

Confirmed that the revised LinAlgWarning catch and fallback fixes the issue in this case, at least on my machine. With any luck, this will resolve #376 and we won't have to raise tolerances further.

Implementing this did require switching out np.linalg for scipy.linalg (for its richer error handling). I don't think this should be a problem, but I'm noting it here.

bmcfee commented 5 months ago

Well, bsseval is now about a million times slower; I'm guessing that's because it's properly detecting ill-conditioning where it wasn't before. It's kind of a miracle that it worked previously, IMO.

bmcfee commented 5 months ago

Adding the :bug: label here because I'm becoming convinced that the present implementation shouldn't be trusted. To summarize where we're at:

The bsseval package works around this by explicitly regularizing the problem, which I think I advocated for several zillion years ago. I'm not sure we want to go down that road now, but I also don't see any reasonable way to preserve the current behavior.
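
For concreteness, the regularized approach amounts to something like this (an illustrative sketch; the names and eps value are made up, see the bsseval/museval code for what they actually do):

```python
import numpy as np
import scipy.linalg

def ridge_solve(G, D, eps=1e-10):
    # Diagonal loading turns the near-singular system into a well-posed one.
    # The eps value here is purely illustrative, not what bsseval/museval use.
    return scipy.linalg.solve(G + eps * np.eye(G.shape[0]), D)
```

Diagonal loading makes the solve well-posed, but it also means the resulting scores no longer exactly match an unregularized implementation.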

codecov-commenter commented 5 months ago

Codecov Report

All modified and coverable lines are covered by tests :white_check_mark:

Project coverage is 95.81%. Comparing base (485a425) to head (5281268).


Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main     #381      +/-   ##
==========================================
+ Coverage   95.67%   95.81%   +0.13%
==========================================
  Files          19       19
  Lines        2936     2936
==========================================
+ Hits         2809     2813       +4
+ Misses        127      123       -4
```

| [Flag](https://app.codecov.io/gh/craffel/mir_eval/pull/381/flags?src=pr&el=flags&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Colin+Raffel) | Coverage Δ | |
|---|---|---|
| [unittests](https://app.codecov.io/gh/craffel/mir_eval/pull/381/flags?src=pr&el=flag&utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Colin+Raffel) | `95.81% <100.00%> (+0.13%)` | :arrow_up: |

Flags with carried forward coverage won't be shown. [Click here](https://docs.codecov.io/docs/carryforward-flags?utm_medium=referral&utm_source=github&utm_content=comment&utm_campaign=pr+comments&utm_term=Colin+Raffel#carryforward-flags-in-the-pull-request-comment) to find out more.


craffel commented 5 months ago

Thanks for all this sleuthing. Can we get an additional opinion from a source separation eval expert, maybe @faroit?

bmcfee commented 5 months ago

As an aside, I do think we could also significantly accelerate everything here by rewriting it with proper vectorization instead of nested for loops. However, I don't have the fortitude for a yak shave of this magnitude.
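
For the record, the kind of rewrite I mean is something like this (made-up shapes, not mir_eval's actual data layout): collapse the per-frame nested loops into a single einsum.

```python
import numpy as np

# Per-frame Gram matrices via one einsum call instead of nested Python loops.
nframes, nsrc, flen = 20, 4, 2048
frames = np.random.randn(nframes, nsrc, flen)

G_loop = np.empty((nframes, nsrc, nsrc))
for t in range(nframes):
    for i in range(nsrc):
        for j in range(nsrc):
            G_loop[t, i, j] = frames[t, i] @ frames[t, j]

G_vec = np.einsum("tif,tjf->tij", frames, frames)
assert np.allclose(G_loop, G_vec)
```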

faroit commented 5 months ago

@bmcfee @craffel Looking into this now. I'm not sure if it's related, but I had some similar issues with FFT precision when torchaudio made some changes a few months ago...

bmcfee commented 5 months ago

> I had some similar issues with FFT precision when torchaudio made some changes a few months ago

Yeah, it shouldn't be surprising. TBH I've never done a full deep dive on bsseval, but from the parts I've encountered, many of the quantities involved seem fundamentally plagued by numerical instabilities. They do seem fixable, e.g. by regularization (ridge regression) or by reconceptualizing the definition to account for rank-deficient projections (least squares), but neither of these leads to a compatibility-preserving implementation.

My casual bystander opinion is that we really shouldn't be striving to preserve compatibility here, since the current implementation is obviously a bit broken. OTOH, I'm not sure it's mir_eval's job to go about changing metrics without at least some due diligence (i.e. research and publications) evaluating the changes.

I don't follow the source separation literature closely (maybe I should?) but I do seem to recall seeing at least a couple of papers proposing revisions to bsseval, maybe from @rabitt or some of the MERL folks? Maybe there's something there we could build on to improve the current implementation?

bmcfee commented 4 months ago

Circling back on this one today, and it's still pretty ugly. I've had my head down in the code for a little bit now, and am increasingly convinced that this whole thing needs a rewrite. (Relax: I don't want to do that in this PR.)

To recap things, here's where we're at:

TLDR: fixing things made it both slower and more prone to failure. Is this progress? I don't know. But things are at least marginally more reproducible now, and we don't have silent failures anymore.

So what should we do? I see a few options:

craffel commented 4 months ago

Thanks for the summary. I'm in favor of the ridge solution if it's generally accepted by the community, but I suppose we'd need to be careful about communicating what metric exactly we have implemented here (so that people don't erroneously compare to similar-but-different ridge-free implementations). Secondarily, how did it come to pass that there are parallel codebases?

bmcfee commented 4 months ago

> I'm in favor of the ridge solution if it's generally accepted by the community, but I suppose we'd need to be careful about communicating what metric exactly we have implemented here (so that people don't erroneously compare to similar-but-different ridge-free implementations).

I totally agree in principle, but I don't think we can hope to avoid erroneous comparisons here. People are going to copypasta results tables from older papers, and there's really nothing we can do to stop them, with the possible exception of leaving the old-and-busted implementations as-is and defining new terms for the stabilized implementations. But then again, the museval code is already stabilized and uses the same terms, so :shrug:

Secondarily, how did it come to pass that there are parallel codebases?

TBH, I think a good portion of this comes down to us not being good enough at staying on top of maintenance and releases. There's also the issue of adding functionality, e.g. #68 and #272, which we let slide. Forks are inevitable when these things happen.

Then there are the total rewrites for speed / platform differences, e.g. https://github.com/fakufaku/fast_bss_eval/ and https://github.com/fgnt/ci_sdr.

craffel commented 4 months ago

Hm, yes, maybe based on https://github.com/craffel/mir_eval/issues/68#issuecomment-495227366 it would make most sense to remove it or just call out to museval?

bmcfee commented 4 months ago

> Hm, yes, maybe based on #68 (comment) it would make most sense to remove it or just call out to museval?

Lol, I'd forgotten that we'd already had this exact conversation before, and apparently agreed that this is neither our circus nor our monkeys. :laughing:

What I think makes sense here is to implement a deprecation cycle, and then remove the module entirely. A shim to museval could be convenient, but I don't like it in general for two reasons:

  1. It would add a dependency that we didn't have before
  2. museval would then have a dependent (mir_eval) that could lock them into certain APIs or numerical compatibility constraints. This would make it more difficult for museval to evolve in the future.
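
As for the deprecation cycle itself, I'm imagining something minimal along these lines (a purely illustrative stub, not the eventual implementation):

```python
import warnings

def bss_eval_sources(reference_sources, estimated_sources, **kwargs):
    # Purely illustrative deprecation stub, not the eventual implementation.
    warnings.warn(
        "mir_eval.separation is deprecated and will be removed in a future "
        "release; consider museval or fast_bss_eval instead.",
        FutureWarning,
        stacklevel=2,
    )
    ...  # the existing implementation would stay unchanged during the cycle
```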

@faroit what do you think?

bmcfee commented 3 months ago

Alright, I'm closing this PR out to open a new one implementing the deprecation cycle. (And meanwhile relaxing the test tolerance to skirt around the heisenbugs.)