Let the current model in a backward search be in `M1`, with `k1=1, k2=estimate`.
Valid backward moves are `k1=1, k2=1` in `M1`, and `k1=1, k2=1` in `M2`.
However, `M2` will never return `k1=1, k2=1`. The current logic first checks the largest possible model in the subspace, then all combinations involving at least one "optionally fixed" parameter. In `M2`, `k1` is not "optionally fixed" -- it must be fixed, based on the current model. Hence, no model is returned from `M2`.
This PR changes the logic so that the first check is now "the largest possible model in the space, but with the parameters that were fixed in the other model subspace kept fixed", and applies the corresponding fix to forward moves.
This fixes a major bug in the stepwise logic.
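To illustrate the intended behavior (this is a simplified sketch, not the actual `petab_select` API; the subspace/model representation and the function below are hypothetical): a backward candidate keeps the parameters that are fixed in the current model fixed, and fixes one currently estimated parameter to a value allowed by the target subspace.

```python
ESTIMATE = "estimate"

# Hypothetical subspace M2: each parameter may be fixed to 1 or estimated.
M2 = {"k1": [1, ESTIMATE], "k2": [1, ESTIMATE]}
# Current model (from subspace M1): k1 is fixed, k2 is estimated.
current = {"k1": 1, "k2": ESTIMATE}

def backward_candidates(subspace, current):
    """Sketch of the corrected logic: parameters fixed in the current model
    stay fixed; each estimated parameter is fixed in turn to produce a
    candidate with one fewer estimated parameter."""
    candidates = []
    for parameter, value in current.items():
        if value != ESTIMATE:
            continue  # fixed in the current model: must stay fixed
        for fixed_value in subspace[parameter]:
            if fixed_value == ESTIMATE:
                continue  # only fixing moves are backward moves
            candidate = dict(current)
            candidate[parameter] = fixed_value
            # The candidate must be representable in the target subspace.
            if all(candidate[p] in subspace[p] for p in subspace):
                candidates.append(candidate)
    return candidates

print(backward_candidates(M2, current))  # [{'k1': 1, 'k2': 1}]
```

Under the old logic, `k1` would first have to be "optionally fixed" for the `k1=1, k2=1` combination to be generated, so `M2` yielded no candidate at all.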
An alternative fix would be to remove these lines from the backward method: https://github.com/PEtab-dev/petab_select/blob/316b4375b20af23c171e8ab4924f54ac3f0aa2bd/petab_select/model_subspace.py#L435-L439