Jacob-Stevens-Haas opened 1 year ago
The implementation is based on a stability-promoting term with a 1/nu factor in front. So adjust nu to be smaller to make things more stable (stability is not guaranteed, and if the data is poor this will be challenging anyway).
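To see why a smaller nu pushes the coefficients harder toward the stabilized variable, here is a minimal NumPy sketch (my own illustration, not pysindy source) of the closed-form partial minimization over $w$ of $0.5\|y-Xw\|^2_2 + (0.5/\nu)\|w-u\|^2_2$, with `u` standing in for the stabilized auxiliary variable:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)
u = np.array([0.9, -1.9, 0.4])  # stand-in for the stabilized auxiliary variable

def w_update(nu):
    # argmin_w 0.5||y - Xw||^2 + (0.5/nu)||w - u||^2
    # Setting the gradient to zero gives (X^T X + I/nu) w = X^T y + u/nu.
    n = X.shape[1]
    return np.linalg.solve(X.T @ X + np.eye(n) / nu, X.T @ y + u / nu)

for nu in [1e2, 1.0, 1e-4]:
    w = w_update(nu)
    # The distance from w to u shrinks as nu decreases: the 1/nu factor
    # makes the coupling term dominate the least-squares term.
    print(nu, np.linalg.norm(w - u))
```

With large nu the update is essentially unregularized least squares; as nu shrinks, `w` is pinned to `u`.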
Ohhh, I think I'm looking for `model.optimizer.coef_full_`, which is -1e-8 in this example? I think there may be an issue with the docstring mixing up its variables:
Attempts to minimize the objective function
$$ 0.5\|y-Xw\|^2_2 + \lambda R(u) + (0.5 / \nu)\|w-u\|^2_2 $$

subject to

$$ Cu = d, \quad Du = e, \quad w \text{ negative definite} $$
where $R(u)$ is a regularization function
Later, in the argument description for `nu`, it says "Decreasing nu encourages u and v to be close", and I'm thinking these refer to variable names in a paper rather than the docstring? Then later it says `self.coef_ = v` (i.e. `self.coef_ = coef_sparse`) and `self.coef_full_ = u` (i.e. `self.coef_full_ = coef_negative_definite`). There is no mention of w, and it seems u has switched to meaning w in the equation.
In trying to tease out u vs v vs w, I saw that `StableLinearSR3._update_A()` creates `coef_negative_definite`. Is it actually equal to the partial minimization over $w$ of the objective above?
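I haven't traced `_update_A()` line by line, but if it is enforcing the negative-definite constraint, one standard approach (a sketch under my own assumptions, not the pysindy source) is to symmetrize and clip the eigenvalues:

```python
import numpy as np

def project_negative_definite(A, eps=1e-6):
    # One common projection onto symmetric negative definite matrices:
    # symmetrize, eigendecompose, and clip every eigenvalue to at most -eps.
    # This is my guess at the kind of step _update_A() might take, not a
    # copy of its implementation.
    S = 0.5 * (A + A.T)
    evals, evecs = np.linalg.eigh(S)
    evals = np.minimum(evals, -eps)
    return evecs @ np.diag(evals) @ evecs.T

A = np.array([[0.3, 1.0],
              [0.0, -2.0]])  # upper triangular: eigenvalues 0.3 and -2.0
A_nd = project_negative_definite(A)
print(np.linalg.eigvalsh(A_nd))  # every eigenvalue is now <= -1e-6
```

If `_update_A()` does something like this, the returned matrix should never have a positive eigenvalue, which is what makes the behavior below surprising.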
@akaptano, I tried fitting a linear model with the `StableLinearSR3` optimizer, but got some positive eigenvalues. Even fitting the model on a much smaller subset of the data shows a positive eigenvalue. This shouldn't happen, correct?

Reproducing code example:
Result
Since the coefficient is a scalar, the eigenvalue is just the coefficient, which is greater than zero.
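As a quick sanity check on that claim (my own illustration; the coefficient value here is hypothetical, not the one from the fit): for a one-state linear system the learned matrix is 1x1, so its only eigenvalue is the coefficient itself.

```python
import numpy as np

coef = 5.0e-3            # hypothetical positive scalar coefficient
A = np.array([[coef]])   # 1x1 system matrix for a one-state linear model
eigenvalues = np.linalg.eigvals(A)
# For a 1x1 matrix the spectrum is just the single entry, so a positive
# coefficient means a positive eigenvalue, violating negative definiteness.
print(eigenvalues)
```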
PySINDy/Python version information: current master