elise1993 closed this issue 9 months ago
Update:

This issue appears to have resolved itself with the latest commits (94e2ef9, d467b8c); the reason is unknown.
[stackmax=750, rmax=750]
The model now performs better on validation as stackmax increases. I believe this is expected: the linear HAVOK model approximates the Koopman operator, which should converge to the true dynamics as the embedding dimension grows.
To validate this, we could apply HAVOK to a nonlinear system with a known closed-form linearization (i.e., a finite-dimensional Koopman operator), such as:
Speculation: When the closed-form linearization exists, we should not see any improvement with higher HAVOK dimension.
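A standard test system with an exact finite-dimensional Koopman linearization is the slow-manifold example of Brunton et al. (2016). A minimal sketch (in Python rather than the repo's MATLAB; the parameter values and initial condition are assumptions):

```python
import numpy as np

# Slow-manifold system: dx1/dt = mu*x1, dx2/dt = lam*(x2 - x1^2).
# Lifting to y = (x1, x2, x1^2) gives an EXACT 3-dimensional linear system.
mu, lam = -0.05, -1.0

def f_nonlinear(x):
    return np.array([mu * x[0], lam * (x[1] - x[0] ** 2)])

# Exact Koopman-linear dynamics of the lifted state y = (x1, x2, x1^2)
K = np.array([[mu,  0.0,  0.0],
              [0.0, lam, -lam],
              [0.0, 0.0,  2 * mu]])

def rk4(f, x0, dt, n):
    # Classical fixed-step RK4 integrator (to keep the sketch dependency-free)
    x = np.array(x0, dtype=float)
    for _ in range(n):
        k1 = f(x); k2 = f(x + 0.5 * dt * k1)
        k3 = f(x + 0.5 * dt * k2); k4 = f(x + dt * k3)
        x = x + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

x0 = np.array([1.0, 0.5])
xT = rk4(f_nonlinear, x0, 0.01, 1000)                     # nonlinear trajectory
yT = rk4(lambda y: K @ y, [x0[0], x0[1], x0[0] ** 2], 0.01, 1000)

# The lifted linear model reproduces the nonlinear state (up to integration
# error) without any need to increase the dimension further.
print(np.allclose(xT, yT[:2], atol=1e-6))
```

If the speculation above is right, running HAVOK on this system should show no accuracy gain beyond the exact linearization dimension.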
This issue was never resolved; it persists when the linear HAVOK model is simulated for long enough:
[stackmax=350, rmax=stackmax]
Using a smaller stackmax keeps the model bounded:
[stackmax=40, rmax=stackmax]
Using a large stackmax but a small rmax can also bound the model; it then captures the overarching trends, but with less precision:
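The stackmax/rmax trade-off can be sketched with a Hankel-matrix truncation (an illustrative Python example on a made-up two-tone signal, not the repo's code): keeping only the leading rmax delay modes retains the slow trend and discards the fast detail.

```python
import numpy as np

# Stand-in signal: slow trend plus a weaker fast component (assumed values)
t = np.arange(0, 30, 0.01)
x = np.sin(t) + 0.2 * np.sin(7.1 * t)

stackmax = 200
H = np.lib.stride_tricks.sliding_window_view(x, stackmax).T  # Hankel matrix
U, s, Vt = np.linalg.svd(H, full_matrices=False)

def recon_error(rmax):
    # Relative error of the rank-rmax reconstruction of the Hankel matrix
    Hr = (U[:, :rmax] * s[:rmax]) @ Vt[:rmax]
    return np.linalg.norm(H - Hr) / np.linalg.norm(H)

print(recon_error(2))   # small rmax: only the dominant slow modes survive
print(recon_error(4))   # rmax covering all modes: near-exact reconstruction
```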
Other possible solutions:

Perhaps the model could be regularized during construction so that the eigenvalues of the HAVOK system matrix keep negative real parts (i.e., the linear model remains stable). What impact would this have on performance?
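One way such a fix could look (a hypothetical post-hoc stabilization, not something the repo implements): after fitting a continuous-time operator A, reflect any eigenvalues with positive real part back into the left half-plane.

```python
import numpy as np

def stabilize(A, margin=1e-6):
    """Reflect unstable eigenvalues of A into the left half-plane.

    Hypothetical sketch: assumes A is diagonalizable; conjugate eigenvalue
    pairs are reflected together, so the result stays (numerically) real.
    """
    w, V = np.linalg.eig(A)
    w = np.where(w.real > 0, -w.real - margin + 1j * w.imag, w)
    return (V @ np.diag(w) @ np.linalg.inv(V)).real

# Demo on a random (generically unstable) matrix
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A_s = stabilize(A)
print(np.linalg.eigvals(A_s).real.max() <= 0)  # spectrum is now stable
```

This bounds the closed-loop simulation by construction, at the cost of biasing the fitted dynamics, which is the performance question raised above.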
Issue
When a longer memory (as defined by stackmax) is allowed in the HAVOK model, closed-loop forecasts of the system diverge over time.
For reproduction, use the following settings in simulateModel.m:
If stackmax is below 50, the model does not diverge within the simulated time span.
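For reference, a minimal HAVOK-style pipeline (illustrative Python sketch, not the repo's MATLAB code; the signal and parameter values are assumptions). A closed-loop forecast of the fitted linear model must diverge when the spectral radius of its transition matrix exceeds one, which gives a direct diagnostic for the divergence described above.

```python
import numpy as np

# Stand-in time series (the repo uses its own data; values assumed here)
t = np.arange(0, 60, 0.01)
x = np.sin(t) + 0.5 * np.sin(2.3 * t)

stackmax, rmax = 40, 4
H = np.lib.stride_tricks.sliding_window_view(x, stackmax).T   # Hankel matrix
U, s, Vt = np.linalg.svd(H, full_matrices=False)
v = Vt[:rmax].T                               # delay-embedded coordinates

# Fit a discrete-time linear model v[k+1] = A v[k] by least squares
A, *_ = np.linalg.lstsq(v[:-1], v[1:], rcond=None)
A = A.T

# Spectral radius > 1 means the closed-loop forecast grows without bound
rho = np.abs(np.linalg.eigvals(A)).max()
print(rho)
```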
Potential solution
This problem may be due to unconserved "energy" in the system model: energy continuously builds up because of errors in the derivative calculations. To prevent this, an integrator that bounds the energy error, such as a symplectic scheme, may be used; alternatively, a higher-order adaptive method such as RKF78 could reduce the error accumulation.
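The energy argument can be illustrated on a toy harmonic oscillator (an assumed example, unrelated to the repo's model): explicit Euler pumps energy into the system at every step, while the symplectic Euler scheme keeps it bounded.

```python
import numpy as np

# Harmonic oscillator: dq/dt = p, dp/dt = -q, energy H = (q^2 + p^2)/2
dt, n = 0.1, 1000

def explicit_euler(q, p):
    # Both updates use the old state: energy grows by a factor (1 + dt^2)/step
    return q + dt * p, p - dt * q

def symplectic_euler(q, p):
    p = p - dt * q        # kick with the old position
    return q + dt * p, p  # then drift with the NEW momentum

def energy_after(step):
    q, p = 1.0, 0.0
    for _ in range(n):
        q, p = step(q, p)
    return 0.5 * (q * q + p * p)

print(energy_after(explicit_euler))    # energy has blown up
print(energy_after(symplectic_euler))  # energy stays near the initial 0.5
```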