glotaran / pyglotaran-examples

This repository holds examples showcasing the use of the pyglotaran package
MIT License

🩹 Fix bad ports (2) #104

Closed s-weigand closed 4 months ago

s-weigand commented 5 months ago

Follow-up PR for #102 with refinements based on comparison errors and the work in #103

Change summary

Checklist

sourcery-ai[bot] commented 5 months ago

🧙 Sourcery is reviewing your pull request!


Tips

- Trigger a new Sourcery review by commenting `@sourcery-ai review` on the pull request.
- You can change your review settings at any time by accessing your [dashboard](https://sourcery.ai/dashboard):
  - Enable or disable the Sourcery-generated pull request summary or reviewer's guide;
  - Change the review language.
- You can always [contact us](mailto:support@sourcery.ai) if you have any questions or feedback.
jsnel commented 4 months ago

Found the cause of the CI failure (in sim-3d-weight) here.

The issue is/was that the parameter values for (some of the elements of) the activation would go wild, specifically with `non-negative=True` and optimization method `lm`.

Ultimately this would lead to `Parameter(label='irf.width2', value=0.0, non_negative=True)` and other weird values such as `Parameter(label='irf.center3', value=5.695921809916089e+60, non_negative=True)`.

The value of 0.0 for `irf.width2` causes a division by zero in `calculate_matrix_gaussian_activation_on_index`, specifically in `beta = (t_n - center) / (width * SQRT2)` (at least when compiled with numba; in pure Python it only results in a runtime warning).
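
For context, here is a minimal sketch of the failing expression; the function below is a hypothetical stand-in, not the actual pyglotaran implementation:

```python
import numpy as np

SQRT2 = np.sqrt(2)


def gaussian_activation_beta(times, center, width):
    """Sketch (not the pyglotaran code) of the beta term from
    calculate_matrix_gaussian_activation_on_index.

    With width == 0.0 the denominator is zero: plain numpy only emits a
    RuntimeWarning and yields inf/nan, while the numba-compiled version
    fails hard, as observed in the CI run.
    """
    return (times - center) / (width * SQRT2)


times = np.linspace(-1.0, 1.0, 5)
# width of exactly 0.0, as in Parameter(label='irf.width2', value=0.0):
print(gaussian_activation_beta(times, center=0.0, width=0.0))
# RuntimeWarning: divide by zero encountered ... -> [-inf -inf nan inf inf]
```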

The exact 0 value is most likely because the original estimate (per the optimizer) for this value is -96, which with `{"non-negative": True}` becomes 0.
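
As a rough illustration only (assuming, without having verified it here, that non-negative parameters are mapped back from optimizer space with an exponential), an internal estimate of -96 corresponds to a value so small that it is indistinguishable from zero as an IRF width:

```python
import math

# Hypothetical back-transform of a non-negative parameter from optimizer space.
# If the optimizer works on log(value), an internal estimate of -96 maps to:
print(math.exp(-96))  # ~2.0e-42, i.e. effectively zero for irf.width2
```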

And the reason this value goes wild is that scaling is not working properly :smile: because of https://github.com/glotaran/pyglotaran/issues/1463

So we find ourselves in a proper chicken-and-egg situation: for the CI to pass here, I must first merge https://github.com/glotaran/pyglotaran/pull/1461

jsnel commented 4 months ago

We went from 214 failed, 363 passed, 49 warnings (here), to 203 failed, 374 passed, 49 warnings (here), to 159 failed, 418 passed, 49 warnings now (here),

and all tests are running again. ^^