Closed robbibt closed 2 months ago
That's not good. I'll try to track it down.
Hey @robbibt,
A few separate things are happening, all related to when I updated arguments.py to include (a lot) more constituents. Some of these changes are permanent as they're fixes to old bugs. Some are new bugs that will be fixed in #336.

- eta2 has been corrected. This only affected results when inferring from minor constituents.
- L2b is now inferred when L2 is known. In 2.1.4, I didn't infer this constituent if L2 was available.
- J1 was using the default and not the one for FES models. This should be fixed with the PR.

Thanks for pointing out the problem. I don't think I would have caught it without the nudge. I have a few things I need to check, but I think with the bug fixes I'll put out a release sooner rather than later.
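To illustrate the inference path mentioned above (this is a minimal sketch of the general idea, not pyTMD's actual code), a minor constituent like L2b can be estimated by interpolating the complex admittance between two bracketing major constituents. The frequencies and amplitudes below are placeholder values chosen for illustration only:

```python
import numpy as np

def infer_minor(freq_minor, freq_lo, freq_hi, z_lo, z_hi):
    """Linearly interpolate a complex tidal amplitude at freq_minor
    from the known complex amplitudes at freq_lo and freq_hi.
    (Illustrative only; real inference schemes weight by equilibrium
    amplitudes and may use more than two major constituents.)"""
    w = (freq_minor - freq_lo) / (freq_hi - freq_lo)
    return (1.0 - w) * z_lo + w * z_hi

# complex amplitudes (amplitude * exp(-i*phase)) of two major
# semidiurnal constituents -- placeholder values, not model output
z_m2 = 1.20 * np.exp(-1j * np.deg2rad(30.0))   # M2, ~0.0805114 cycles/hr
z_s2 = 0.45 * np.exp(-1j * np.deg2rad(55.0))   # S2, ~0.0833333 cycles/hr

# infer a minor constituent lying between them in frequency
z_minor = infer_minor(0.082023, 0.0805114, 0.0833333, z_m2, z_s2)
```

The point of the sketch is that a corrected coefficient for a constituent like eta2 only changes results when a model's output has to be inferred this way, which is consistent with the behaviour described above.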
Fantastic, thanks @tsutterley - this wasn't a problem for us, just unexpected, so I thought it was worth raising. Glad to hear it helped clear out some hidden bugs!
I recently noticed that some of our FES2014 integration tests are failing, and realised that pyTMD is producing different results in 2.1.5 vs 2.1.4, sometimes by over 10 cm:

[comparison scatterplot]

Longer comparison scatterplot:

[scatterplot]
I'm not yet sure if the 2.1.5 results are better or worse - just that they are different. I haven't tested this exhaustively across all the models, but so far FES models are the only ones that seem different (EOT20 and TPXO9 seem unchanged).
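For anyone wanting to reproduce this kind of comparison, here is a minimal sketch of the regression check described above. The arrays stand in for modelled tide heights from the two versions, and the 2 cm tolerance is an illustrative assumption, not a project setting:

```python
import numpy as np

# Stand-in data: in practice these would be tide heights (in metres)
# computed at the same points/times with pyTMD 2.1.4 and 2.1.5.
rng = np.random.default_rng(42)
heights_v214 = rng.normal(0.0, 1.0, size=1000)
heights_v215 = heights_v214 + rng.normal(0.0, 0.05, size=1000)

# Difference between versions, and how often it exceeds a tolerance
diff = heights_v215 - heights_v214
tol = 0.02  # 2 cm, an arbitrary threshold for this sketch
print(f"max abs difference: {np.abs(diff).max():.3f} m")
print(f"fraction exceeding {tol * 100:.0f} cm: {(np.abs(diff) > tol).mean():.1%}")
```

Running the same check per model (FES2014, EOT20, TPXO9) is what surfaced the pattern that only FES models differ.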