StatePred
Possible cause:
The entries of the learned Koopman matrix and correspondingly its largest eigenvalue are very small.
Solutions:
Adjust StatePred.rank.
Adjust the Kreg regularization argument passed to StatePred.train_net().
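A quick way to check for this failure mode is to look at the spectral radius of the learned Koopman matrix. The sketch below uses plain NumPy on a made-up matrix K standing in for the learned one; the threshold 0.1 is an illustrative assumption, not a value from the package:

```python
import numpy as np

# Hypothetical example: diagnose a learned Koopman matrix whose entries
# (and hence its largest eigenvalue) are very small. K stands in for the
# matrix learned by StatePred; these values are made up for illustration.
K = np.array([[0.02, 0.01],
              [0.00, 0.03]])

eigvals = np.linalg.eigvals(K)
rho = np.abs(eigvals).max()  # spectral radius = magnitude of dominant eigenvalue

print(f"dominant |eigenvalue| = {rho:.3f}")
if rho < 0.1:  # illustrative threshold, not from the package
    print("Koopman matrix is near-zero; predictions will decay towards 0.")
    print("Try adjusting StatePred.rank and/or the Kreg argument to train_net().")
```

If the spectral radius is far below 1, repeated application of K shrinks every state towards zero, which shows up as large loss and ANAE against non-zero ground truth.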
Possible cause:
The system explodes when predicting states at negative indices (i.e. in the past).
Mitigation:
This happens when non-dominant eigenvalues of the Koopman matrix are much less than 1. A number between 0 and 1 raised to a negative integer power is very large, e.g. $0.1^{-5} = 100000$. For negative indices, such non-dominant eigenvalues can therefore dominate the prediction (even if the dominant eigenvalue is 1).
Consider increasing precision, i.e. set precision = "double" in config.py.
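The blow-up and the precision fix can both be seen numerically. This is a minimal NumPy sketch with a hand-built diagonal K (dominant eigenvalue 1, non-dominant eigenvalue 0.1), not the package's own code:

```python
import numpy as np

# Why non-dominant eigenvalues << 1 explode at negative time indices:
# lambda**(-n) is huge when 0 < lambda < 1.
lam = 0.1
print(lam ** -5)  # approximately 100000

# Toy K with dominant eigenvalue 1 and a small non-dominant eigenvalue.
K = np.diag([1.0, 0.1])
K_inv = np.linalg.inv(K)

# Predicting 5 steps into the past applies K^(-5) to the initial state:
x0 = np.array([1.0, 1.0])
x_past = np.linalg.matrix_power(K_inv, 5) @ x0
print(x_past)  # second component is ~100000 and dominates the first

# Double precision stays finite far longer than single precision here:
print(np.float32(0.1) ** np.float32(-60))  # overflows to inf in float32
print(np.float64(0.1) ** np.float64(-60))  # ~1e60, representable in float64
```

This is why the explosion is mild for indices close to zero but catastrophic further into the past, and why double precision buys extra headroom.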
TrajPred
Possible cause:
Trajectory lengths are very long, so errors build up. For example, if the trajectory length is 1001, then 1000 states are generated by repeatedly applying the linear layer (i.e. the Koopman matrix) to the initial state. If the dominant eigenvalue of its eigendecomposition differs slightly from 1, predictions far along the trajectory become inaccurate.
Solutions:
Use more training data and/or more hyperparameter searching to tune the linear layer further.
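The scale of this build-up is easy to quantify. A toy calculation (the eigenvalue 1.005 and the step count 1000 are illustrative numbers, not values from the package):

```python
# Error build-up over a long trajectory: if the learned dominant
# eigenvalue is 1.005 instead of exactly 1, then after 1000
# applications of the Koopman matrix the state is scaled by
# 1.005**1000, i.e. roughly 147x too large.
lam_learned = 1.005
n_steps = 1000

growth = lam_learned ** n_steps
print(f"scale factor after {n_steps} steps: {growth:.1f}")
```

Even a 0.5% error in the dominant eigenvalue compounds into a ~147x overshoot by the end of the trajectory, which is why long trajectories demand a very precisely tuned linear layer.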
Running <StatePred/TrajPred>.train_net() produces very high loss and/or ANAE values, e.g. ANAE values $>100\%$.
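For concreteness, here is a sketch of how such an ANAE value can arise. The definition used below, mean of |prediction - truth| / |truth| expressed as a percentage, is an assumption about what ANAE (Average Normalized Absolute Error) means here; check the package's own metric code for the exact formula:

```python
import numpy as np

def anae(true, pred):
    # Assumed definition: ANAE = mean(|pred - true| / |true|) * 100%.
    # Verify against the package's own implementation before relying on it.
    true, pred = np.asarray(true, float), np.asarray(pred, float)
    return 100.0 * np.mean(np.abs(pred - true) / np.abs(true))

true = np.array([1.0, 2.0, 4.0])
good = np.array([1.1, 2.1, 3.9])    # small relative errors -> low ANAE
bad  = np.array([3.0, -2.0, 12.0])  # wildly off -> ANAE above 100%

print(f"good ANAE = {anae(true, good):.1f}%")
print(f"bad  ANAE = {anae(true, bad):.1f}%")
```

Under this definition, ANAE above 100% means the predictions are, on average, further from the truth than the truth is from zero, which matches the failure modes described above (near-zero Koopman matrices, exploding past states, compounding trajectory error).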