nilsnevertree opened 1 year ago
We could reduce computation time by performing the `sklearn.LinearRegression()` only on a certain slice of the arrays, i.e. choosing a threshold $\epsilon$ below which the kernel weight is set to 0 and no regression is performed.
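A minimal sketch of the thresholding idea (the array names, the `kernel_weights` variable, and the $\epsilon$ value here are assumptions for illustration, not the actual implementation):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
x = rng.normal(size=(1000, 3))                     # hypothetical state array
y = x @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=1000)

# hypothetical local kernel weights (e.g. Gaussian in time, centered at t=500)
kernel_weights = np.exp(-0.5 * ((np.arange(1000) - 500) / 50.0) ** 2)

epsilon = 1e-3
mask = kernel_weights > epsilon                    # drop points whose weight is ~0

# regression on the slice only: the weighted fit is (numerically) unchanged,
# but far fewer samples are passed to sklearn
reg = LinearRegression().fit(x[mask], y[mask], sample_weight=kernel_weights[mask])
print(mask.sum(), reg.coef_)
```

Since the excluded points carry weight below $\epsilon$, they contribute essentially nothing to the fit, so the slice gives the same coefficients at a fraction of the cost.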
We could also try to vectorize the local linear regression problem, since the calculations are independent of each other.
For `x` having 1000 elements along the time axis we get this:
```
kalman.Kalman_SEM()
100%|██████████| 10/10 [00:01<00:00, 5.27it/s]
kalman_time_dependent.Kalman_SEM_time_dependent()
100%|██████████| 10/10 [00:08<00:00, 1.12it/s]
```
It might help to use `np.einsum` for the local linear regression to speed things up.
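One way to vectorize the independent local regressions is to build all the weighted normal equations with `np.einsum` and solve them in one batched `np.linalg.solve` call. This is a sketch under assumed shapes and names (`W`, `X`, `y` are illustrative, not the package's actual variables):

```python
import numpy as np

rng = np.random.default_rng(0)
n_t, n_f = 1000, 3                        # timesteps, features (assumed shapes)
X = rng.normal(size=(n_t, n_f))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.01, size=n_t)

# one kernel-weight vector per timestep -> (n_t, n_t) weight matrix (assumed Gaussian)
t = np.arange(n_t)
W = np.exp(-0.5 * ((t[None, :] - t[:, None]) / 50.0) ** 2)

# batched weighted normal equations: A_k = X^T diag(W_k) X, b_k = X^T diag(W_k) y
A = np.einsum("kt,ti,tj->kij", W, X, X)   # shape (n_t, n_f, n_f)
b = np.einsum("kt,ti,t->ki", W, X, y)     # shape (n_t, n_f)

coefs = np.linalg.solve(A, b)             # all n_t local fits in one call
print(coefs.shape)
```

`np.linalg.solve` broadcasts over the leading axis, so the Python-level loop over timesteps disappears entirely; the trade-off is the $O(n_t^2)$ memory for the full weight matrix `W`.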
The Kalman algorithms using a time-dependent model `M` are very slow! It should be clear that the performance is of order $O(n) \cdot O(\text{classic})$, with `n` being the number of timesteps and `classic` referring to the order of computation time of the classic Kalman algorithms.
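The extra factor of $O(n)$ can be illustrated schematically: the time-dependent variant performs one local weighted fit per timestep, each costing roughly one "classic" solve. The loop below is a hypothetical sketch of that structure, not the actual source:

```python
import numpy as np

def classic_step(X, y, w):
    # one weighted least-squares solve: the O(classic) unit of work
    A = X.T @ (w[:, None] * X)
    b = X.T @ (w * y)
    return np.linalg.solve(A, b)

rng = np.random.default_rng(1)
n_t, n_f = 200, 2                       # assumed sizes for illustration
X = rng.normal(size=(n_t, n_f))
y = X @ np.array([2.0, 1.0])            # exact linear data, no noise
t = np.arange(n_t)

# time-dependent variant: one solve per timestep -> n * O(classic) total
coefs = np.array([
    classic_step(X, y, np.exp(-0.5 * ((t - k) / 20.0) ** 2))
    for k in range(n_t)
])
print(coefs.shape)
```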