Closed EthanJamesLew closed 5 months ago
Optimize the function
$$ \operatorname*{argmin}_A \sum_{i=1}^{n} \|Ay_i - y_i'\|^2 = \| Y' - AY \|^2_F $$
The solution to this is given by the Moore–Penrose pseudoinverse:
$$ A = Y' Y^\dagger = Y' Y^T \left(Y Y^T\right)^{-1} $$
We use the SVD to reduce rank.
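As a minimal numpy sketch of both steps (random data and hypothetical variable names, not the AutoKoopman API), the pseudoinverse solution and an SVD rank truncation look like:

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_samples, rank = 3, 50, 2

# snapshot matrices: columns are the samples y_i and y_i'
Y = rng.standard_normal((n_state, n_samples))
A_true = rng.standard_normal((n_state, n_state))
Yp = A_true @ Y  # noiseless data, so A_true is recoverable exactly

# least-squares solution A = Y' Y^+ via the Moore-Penrose pseudoinverse
A = Yp @ np.linalg.pinv(Y)

# rank reduction: truncate the SVD of Y before inverting
U, s, Vh = np.linalg.svd(Y, full_matrices=False)
Ur, sr, Vhr = U[:, :rank], s[:rank], Vh[:rank, :]
A_reduced = Yp @ Vhr.conj().T @ np.diag(1.0 / sr) @ Ur.conj().T
```

With full-rank noiseless data, `A` matches `A_true`; `A_reduced` is the rank-limited fit.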
$$ \operatorname*{argmin}_A \sum_{i=1}^{n} w_i \|Ay_i - y_i'\|^2 = \sum_{i=1}^{n} \|\sqrt{w_i}\,Ay_i - \sqrt{w_i}\,y_i'\|^2 = \sum_{i=1}^{n} \|A(\sqrt{w_i}\,y_i) - \sqrt{w_i}\,y_i'\|^2 = \| Y'W^{\frac{1}{2}} - A Y W^{\frac{1}{2}} \|^2_F $$
where $W = \operatorname{diag}(w_1, \ldots, w_n)$.
So, the solution is
$$ A = Y' W Y^T \left(Y W Y^T\right)^{-1} $$
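A sketch of the weighted fit (again with illustrative names, not the library's internals): scaling each sample column by $\sqrt{w_i}$ reduces the problem to ordinary least squares.

```python
import numpy as np

rng = np.random.default_rng(1)
n_state, n_samples = 3, 40
Y = rng.standard_normal((n_state, n_samples))
A_true = rng.standard_normal((n_state, n_state))
Yp = A_true @ Y  # noiseless snapshots

# per-sample weights w_i > 0, i.e. W = diag(w_1, ..., w_n)
w = rng.uniform(0.1, 1.0, n_samples)
sqrt_w = np.sqrt(w)

# scale each column (sample) by sqrt(w_i), then solve the ordinary
# least-squares problem: A = (Y' W^{1/2}) (Y W^{1/2})^+
A = (Yp * sqrt_w) @ np.linalg.pinv(Y * sqrt_w)
```

Expanding the pseudoinverse recovers the normal-equations form $A = Y' W Y^T (Y W Y^T)^{-1}$.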
@Abdu-Hekal check out the new notebook that does weighted eDMD: https://github.com/EthanJamesLew/AutoKoopman/blob/feature/lie-obs/notebooks/weighted-cost-func.ipynb
Currently, it does not weight state (just points). However, it looks straightforward to extend the weighting matrix to include state weighting.
@EthanJamesLew Looks great! I will test on the falsification framework.
@EthanJamesLew The wdmdc can sometimes run into an error due to matrix singularity when computing the inverse in `Atilde = Yp @ V @ np.linalg.inv(Sigma) @ U.conj().T`. An easy fix which seems to work is to use the pseudo-inverse instead: `Atilde = Yp @ V @ np.linalg.pinv(Sigma) @ U.conj().T`. Otherwise, we could try to catch the error?
Pinv can be slower though, so we may want to catch the error instead.
@Abdu-Hekal we can likely address this by forcing a reduced-rank model to make the block matrix in Sigma invertible. Of course, depending on how severely we weight points (i.e., effectively removing almost all of them), we can still end up with an error.
Also, @Abdu-Hekal it would be interesting to build an integration testbench for AutoKoopman Falsification so I can run into these interesting edge cases during regressions. Maybe we can collect some representative system id problems for AutoKoopman if running the whole Falsification tool is impractical inside of a GitHub action (MATLAB + Gurobi, plus I imagine a nontrivial amount of compute needed). I can also run these tests locally from time to time to ensure that nothing looks off before releasing.
Interestingly, the error occurs when all points are weighted equally with a relatively reasonable weight (0.3 in one instance). The error also typically occurs when only one training trajectory is used. For now, I am using a try/except that first tries `inv` and then falls back on `pinv` if it fails.
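A minimal sketch of that fallback (the helper name and the `Sigma` argument are placeholders, not the exact wdmdc internals):

```python
import numpy as np

def safe_inverse(Sigma: np.ndarray) -> np.ndarray:
    """Try the exact inverse first; fall back to the pseudo-inverse
    when Sigma is singular (pinv is slower but always defined)."""
    try:
        return np.linalg.inv(Sigma)
    except np.linalg.LinAlgError:
        return np.linalg.pinv(Sigma)

# usage in the Atilde computation:
#   Atilde = Yp @ V @ safe_inverse(Sigma) @ U.conj().T
```

This keeps the fast path for the common well-conditioned case while still completing the fit when weighting drives `Sigma` singular.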
Yes, I agree! I will aim to generate a suite of (interesting) trajectories for each model and a set of corresponding weights (once I land on a suitable weighting strategy) that we can run in a GitHub Action. It would also be best to have an iterative approach to mimic the falsification loop, where a single trajectory is added every iteration and a Koopman model is learnt.
Closing because of #88
Summary
Use weighting for the DMD objective function, not just the hyperparameter search. See #84
CC @Abdu-Hekal