mikehagerty opened this issue 1 year ago
Offhand, I'm not aware of any changes to the misfit function itself that would cause results to differ by a constant factor. My guess is that data processing changes are responsible (e.g., changes to the data processing code itself, to default or user-supplied data processing parameters, or to the observed waveforms' time discretization).
I will troubleshoot and respond further in the next days or weeks.
Possibly useful for debugging:
This continuous integration test runs every time changes are pushed to GitHub. It evaluates the misfit function over a very coarse grid and checks whether the best fitting solution changes: https://github.com/rmodrak/mtuq/blob/master/tests/test_grid_search_mt.py
This test is probably even more relevant than the one above, but unfortunately it is not currently included in the continuous integration suite. It compares the pure Python and C-accelerated misfit implementations and displays the resulting misfit values: https://github.com/rmodrak/mtuq/blob/master/tests/test_misfit.py
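To illustrate the time-discretization point above: this is a minimal sketch (not MTUQ's actual misfit code) showing how an unnormalized L2 misfit summed over samples scales with the sampling rate, so a resampling change rescales the reported value by a constant factor without changing which solution fits best.

```python
import numpy as np

def l2_misfit(observed, synthetic):
    """Hypothetical unnormalized L2 misfit: sum of squared residuals."""
    residual = observed - synthetic
    return np.sum(residual**2)

# The same 1-second, 5 Hz signal sampled at 20 Hz and at 40 Hz
t_coarse = np.linspace(0.0, 1.0, 20, endpoint=False)
t_fine = np.linspace(0.0, 1.0, 40, endpoint=False)

obs_coarse = np.sin(2*np.pi*5*t_coarse)
syn_coarse = 0.9*np.sin(2*np.pi*5*t_coarse)   # same fit quality
obs_fine = np.sin(2*np.pi*5*t_fine)
syn_fine = 0.9*np.sin(2*np.pi*5*t_fine)

# Doubling the sampling rate doubles the unnormalized misfit,
# even though the waveforms and the quality of fit are identical
print(l2_misfit(obs_coarse, syn_coarse))
print(l2_misfit(obs_fine, syn_fine))
```

Dividing by the number of samples (or multiplying by the sample interval) would remove this dependence, which is one reason a processing change can shift misfit values by a constant factor.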
I've just finished following along with the 2022 MTUQ Virtual Workshop. I was able to reproduce all of the plots shown.
One thing I noticed is that, although my waveform fits and mechanisms were very similar, my L2 misfit (as reported on the waveform plot) was often roughly 1.4 to 2 times larger than what was shown in the workshop video.
I'm just curious - did the misfit calculation change (e.g., by a factor of sqrt(2)) since the workshop?
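For illustration only (this is not a claim about what MTUQ actually changed): two common L2 conventions can differ by exactly such a constant factor. For example, switching between (1/2)*sum(r^2) and sum(r^2) doubles the squared misfit, which shows up as a factor of sqrt(2) once square roots are taken.

```python
import numpy as np

# Hypothetical residual vector between observed and synthetic waveforms
residual = np.array([0.3, -0.1, 0.25, 0.05])

half_ssq = 0.5 * np.sum(residual**2)   # convention A: (1/2) * sum of squares
ssq = np.sum(residual**2)              # convention B: sum of squares

# Reported as norms (square roots), the two conventions differ by sqrt(2)
ratio = np.sqrt(ssq) / np.sqrt(half_ssq)
print(ratio)   # sqrt(2), about 1.4142
```

So a sqrt(2)-ish offset between two runs is consistent with a normalization-convention change somewhere in the pipeline rather than a change in the best-fitting solution.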
I know if I had enough patience I could dig through the commit history of the misfit module and try to figure it out, but I thought I'd just ask instead.
Thanks! -Mike