Closed dweindl closed 1 week ago
All modified and coverable lines are covered by tests :white_check_mark:
Project coverage is 76.09%. Comparing base (74ac20d) to head (2f92af9).
Fine for me, but it means tools cannot implement model selection via L1 regularization (admittedly nontrivial on log scale), because driving parameters to exactly 0 would contradict the nonzero lower bound.
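To illustrate the point about L1 regularization, here is a minimal sketch (not PEtab code; the data and penalty weight are made up): an L1 penalty can make the minimizer exactly 0, which a strictly positive lower bound would forbid.

```python
# Illustrative sketch: an L1-regularized least-squares objective for a
# single parameter theta. With a strong enough penalty, the minimum over
# a grid lands at exactly 0 -- unreachable if theta were constrained to
# [lb, ub] with lb > 0.
def objective(theta, data, lam):
    residual = sum((theta * x - y) ** 2 for x, y in data)
    return residual + lam * abs(theta)

data = [(1.0, 0.1), (2.0, 0.15)]
grid = [i / 100 for i in range(0, 101)]
best = min(grid, key=lambda t: objective(t, data, lam=10.0))
print(best)  # -> 0.0
```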
Thx. No strong opinion. I'm fine with either:
Other opinions?
Also no strong opinion.
We support nominal values of 0 regardless of parameter scale and bounds, so no issue for PEtab Select.
Since excluding 0 would be a possibly-breaking change, I would be in favor of allowing 0.
Okay, I guess we'll leave it as is for PEtab v1.
For PEtab v2 we'll probably allow 0. For further discussion -> https://github.com/PEtab-dev/PEtab/pull/579
Sampling, e.g. of initial start vectors for multi-start optimization, when the lower bound is 0 and the parameter is log10-scaled, would mean sampling in (-inf, upperBound], which isn't sensible.
We could require that, if a user specifies a lowerBound of 0 for a log10-scaled parameter, then they should also specify initializationPriorParameters such that the initial sampling of start vectors can occur on parameterScale within finite bounds.
A PEtab problem where samples are drawn uniformly from non-finite bounds should be considered invalid.
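The proposed requirement could look roughly like the following sketch. This is not the libpetab-python API; the function name and the assumption that initializationPriorParameters is a finite (lower, upper) pair on parameterScale are illustrative.

```python
import math
import random

def sample_start_point(lb, ub, prior_params=None):
    """Sample one start value on log10 scale.

    Falls back to user-supplied initializationPriorParameters
    (assumed here to be finite bounds on parameterScale) when the
    scaled lower bound is non-finite; otherwise the problem is invalid.
    """
    lb_scaled = math.log10(lb) if lb > 0 else -math.inf
    ub_scaled = math.log10(ub)
    if not math.isfinite(lb_scaled):
        if prior_params is None:
            raise ValueError(
                "non-finite scaled bounds require initializationPriorParameters"
            )
        lb_scaled, ub_scaled = prior_params
    return random.uniform(lb_scaled, ub_scaled)

random.seed(0)
# lowerBound 0 with log10 scale: sampling only works via the prior bounds.
x = sample_start_point(0.0, 100.0, prior_params=(-3.0, 2.0))
assert -3.0 <= x <= 2.0
```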
from discussion with @dweindl
Closing in favor of https://github.com/PEtab-dev/libpetab-python/pull/278
Previously, bounds were only checked for non-negativity.