Closed: nehalsinghmangat closed this issue 1 month ago.
Great, thanks Nehal! I think you'll need to change the exit criteria for the loop. See if there's a way to do this elegantly while maintaining the current behavior (I think the current option is AIC, and that should be made explicit).
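For context, here is a rough, post-hoc sketch of what an explicit AIC-style selection over SSR's pruning path could look like. This is not the library's current API; it only assumes the optimizer records its coefficient iterates in `history_` (as PySINDy's `BaseOptimizer` subclasses generally do), and the toy data here is purely illustrative:

```python
import numpy as np
import pysindy as ps

# Illustrative data: y depends on columns 0 and 2 of x; column 1 is irrelevant.
rng = np.random.default_rng(0)
x = rng.normal(size=(100, 3))
y = x @ np.array([1.0, 0.0, 2.0]) + 1e-2 * rng.normal(size=100)

opt = ps.SSR()
opt.fit(x, y)

def aic(coef, x, y):
    # AIC = n * log(RSS / n) + 2k, with k the number of active terms.
    resid = y - x @ np.asarray(coef).ravel()
    k = np.count_nonzero(coef)
    return len(y) * np.log(np.mean(resid**2) + 1e-12) + 2 * k

# Score every model SSR produced along its pruning path and keep the best one.
best = min(opt.history_, key=lambda c: aic(c, x, y))
print("AIC-selected coefficients:", np.asarray(best).ravel())
```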
If you go about doing a PR, a few other things might make sense, if you have time:

- Move `max_iter`, `normalize_columns`, `copy_X`, and `unbias` into `**kwargs` that get passed to `super().__init__()` (a rough sketch follows this list).
- Mark the relevant methods as `@staticmethod`, or make them functions.
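A rough sketch of the `**kwargs` idea. The class body and SSR-specific parameter names here are illustrative, not the actual PySINDy source:

```python
from pysindy.optimizers import BaseOptimizer

class SSR(BaseOptimizer):
    # Illustrative only: SSR-specific arguments stay explicit, while the shared
    # BaseOptimizer arguments (max_iter, normalize_columns, copy_X, unbias, ...)
    # are forwarded via **kwargs.  (_reduce and the rest of the class are omitted.)
    def __init__(self, alpha=0.05, criteria="coefficient_value", **kwargs):
        super().__init__(**kwargs)
        self.alpha = alpha
        self.criteria = criteria
```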
Just as a heads up, in your example you need to pass `t=t_sim` to the `SINDy.fit` calls - that's why you're finding 0.2, instead of 2.0, as the coefficient. This is sort of what I was talking about with paring the MWE down to the level of the optimizer, rather than the SINDy object. Here's another example, in addition to the one via email, that you might consider as a test in this PR:
```python
import numpy as np
import pysindy as ps

x = np.zeros((10, 2))
y = np.ones((10,))
x[:, 0] = y
x += np.random.normal(size=(10, 2), scale=1e-2)
print("SSR:", ps.SSR().fit(x, y).coef_)
print("STLSQ:", ps.STLSQ().fit(x, y).coef_)
```

```
SSR: [[ 1.02626566 -0.04611932]]
STLSQ: [[1.02683949 0. ]]
```
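If this becomes a regression test in the PR, one possible phrasing is sketched below. It assumes the fixed SSR, like STLSQ, prunes the pure-noise column to exactly zero; the test name, seed, and tolerance are my own choices, and it would fail against the current behavior shown above:

```python
import numpy as np
import pysindy as ps

def test_ssr_prunes_noise_feature():
    rng = np.random.default_rng(0)
    x = np.zeros((10, 2))
    x[:, 0] = 1.0
    x += rng.normal(size=(10, 2), scale=1e-2)
    y = np.ones(10)

    coef = ps.SSR().fit(x, y).coef_
    assert coef[0, 1] == 0.0                           # noise feature should be pruned
    np.testing.assert_allclose(coef[0, 0], 1.0, atol=5e-2)
```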
Description
When running the SSR algorithm, I noticed that it always returns the first dynamic model it generates. This is, of course, in stark contrast to the intended optimization behavior described in the paper that first introduced this method: *Sparse learning of stochastic dynamical equations*.
Reproducing code example:
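The original reproducing code was posted as a screenshot and is not reproduced here. Below is only a minimal sketch of the kind of setup the comments above describe; the governing equation, the 2.0 coefficient, and the 0.1 time step are assumptions inferred from the discussion of `t=t_sim`:

```python
import numpy as np
import pysindy as ps

# Assumed toy system: dx/dt = 2 x, sampled at dt = 0.1 (values inferred, not from the original post).
t_sim = np.arange(0, 2, 0.1)
x_sim = np.exp(2 * t_sim).reshape(-1, 1)

model = ps.SINDy(optimizer=ps.SSR())
# Passing t=t_sim matters: with the default unit time step the recovered
# coefficient scales down to ~0.2 instead of ~2.0 for dt = 0.1.
model.fit(x_sim, t=t_sim)
model.print()
```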
Screenshot of reproducing code output:
PySINDy/Python version information:
PySINDy 1.7.6.dev325+gf2dfe3e.d20240602, Python 3.10.12 [GCC 11.4.0]