@db091756 what is blocking this? Please tag the relevant tickets in the description or as a comment.
Currently blocked by the refactoring of reduction.py (#581). I don't want to start integrating this into the CMMD algorithm until I know how the reduction will be formatted.
@db091756 update this ticket to indicate that its scope is choosing which methods we will add, and then add a separate ticket for each of them.
Experimentation with an implementation of Algorithm 5.5 on the linked branch of #640 suggests that this method is not accurate enough for our purposes, as kernel matrices do not satisfy numerical positive semi-definiteness to the required degree.
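For reference, a minimal NumPy sketch of the failure mode; the squared-exponential kernel and the data here are illustrative stand-ins, not the setup from the experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 2))

# Squared-exponential kernel matrix -- an illustrative stand-in for the
# kernels used with CMMD, not necessarily the one from the experiment.
sq_dists = np.sum((x[:, None, :] - x[None, :, :]) ** 2, axis=-1)
K = np.exp(-sq_dists / 2.0)

# In exact arithmetic K is positive semi-definite, but in floating point its
# smallest eigenvalues often come out slightly negative. The Nystrom-style
# route (Algorithm 5.5 of [1]) relies on a Cholesky factorisation, which
# this breaks.
print("smallest eigenvalue:", np.linalg.eigvalsh(K).min())
try:
    np.linalg.cholesky(K)
except np.linalg.LinAlgError:
    print("Cholesky failed: K is not numerically positive definite")
```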
Algorithm 5.6 will not reduce the memory cost by an order of magnitude, as we have to compute the kernel matrix anyway at a memory cost of $\mathcal{O}(n^2)$.
What's the new feature?
Various algorithms in [1] produce approximate eigen-decompositions of matrices.
That is, given some hyper-parameter $r > 0$, one can derive an approximate eigen-decomposition of a matrix $A \in \mathbb{R}^{n \times n}$ such that $A \approx U\Lambda U^T$, where $\Lambda \in \mathbb{R}^{r \times r}$ is the diagonal matrix of the $r$ retained eigenvalues and $U \in \mathbb{R}^{n \times r}$ is the matrix of corresponding eigenvectors, with orthonormal columns.
From this, it follows that $A^{-1} \approx U\Lambda^{-1} U^T$; strictly, the right-hand side inverts $A$ on the retained rank-$r$ eigenspace, i.e. it is a pseudo-inverse of the rank-$r$ approximation.
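To make the notation concrete, here is a minimal NumPy sketch. The test matrix and the values of $n$ and $r$ are illustrative, and the truncation below uses an exact eigen-solver rather than the randomised algorithms of [1]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Symmetric PSD test matrix with a rapidly decaying spectrum, so that a
# small rank r captures it well (the matrix, n and r are all illustrative).
n, r = 200, 10
Q, _ = np.linalg.qr(rng.normal(size=(n, n)))
A = (Q * 2.0 ** -np.arange(n)) @ Q.T

# Rank-r truncated eigen-decomposition: lam holds the r largest
# eigenvalues, U the corresponding eigenvectors (n x r, orthonormal columns).
eigvals, eigvecs = np.linalg.eigh(A)
lam, U = eigvals[-r:], eigvecs[:, -r:]

print(np.linalg.norm(A - (U * lam) @ U.T))  # ~2^-r: A ~= U Lambda U^T
A_inv_approx = (U / lam) @ U.T  # inverts A on the retained eigenspace
```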
The algorithms from [1] that I think we should consider adding are:

- Algorithm 5.3 (direct eigenvalue decomposition)
- Algorithm 5.5 (eigenvalue decomposition via the Nyström method)
- Algorithm 5.6 (eigenvalue decomposition in one pass)

I suggest that implementing 5.3 and 5.5 should be the priority, with 5.6 next; a sketch of the pattern behind 5.3 is given below.
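For orientation, here is a rough NumPy sketch of the pattern behind Algorithm 5.3: randomised range finding followed by a small dense eigen-solve. The helper name `approx_eigh`, the test matrix, and the omission of the oversampling and power iterations that [1] recommends in practice are all illustrative simplifications:

```python
import numpy as np

def approx_eigh(A, r, rng):
    """Sketch of the Algorithm 5.3 pattern from [1]: find an orthonormal
    basis Q for an approximate range of A, then solve the small projected
    eigen-problem exactly. Returns (lam, U) with A ~= U @ diag(lam) @ U.T."""
    omega = rng.normal(size=(A.shape[0], r))  # random test matrix
    Q, _ = np.linalg.qr(A @ omega)            # stage A: range finder
    B = Q.T @ A @ Q                           # stage B: r x r projection
    lam, V = np.linalg.eigh(B)
    return lam, Q @ V

rng = np.random.default_rng(0)
n, r = 500, 20
X = rng.normal(size=(n, 5))
A = X @ X.T + 1e-3 * np.eye(n)  # nearly rank-5 PSD test matrix
lam, U = approx_eigh(A, r, rng)
print(np.linalg.norm(A - (U * lam) @ U.T))  # small: the spectrum decays fast
```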
References

[1] N. Halko, P. G. Martinsson, J. A. Tropp, "Finding Structure with Randomness: Probabilistic Algorithms for Constructing Approximate Matrix Decompositions", https://arxiv.org/abs/0909.4061
What value does this add?
In [1] they show that their approximate algorithms cost $\mathcal{O}(rn^2)$ with $r \ll n$, which is much faster than the usual $\mathcal{O}(n^3)$ cost of matrix inversion via eigen-decomposition. This will therefore make the CMMD-related algorithms much faster.
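As a rough way to see the speed-up (sizes are illustrative and timings machine-dependent): the randomised route spends its time in two $n \times n$ by $n \times r$ products, hence $\mathcal{O}(rn^2)$:

```python
import time
import numpy as np

rng = np.random.default_rng(0)
n, r = 2000, 50
X = rng.normal(size=(n, 5))
A = X @ X.T + 1e-3 * np.eye(n)  # illustrative PSD test matrix

t0 = time.perf_counter()
np.linalg.eigh(A)  # full decomposition: O(n^3)
t1 = time.perf_counter()

# Randomised rank-r route: dominated by the two (n x n)(n x r) products,
# i.e. O(r n^2), plus an r x r eigen-solve.
omega = rng.normal(size=(n, r))
Q, _ = np.linalg.qr(A @ omega)
lam, V = np.linalg.eigh(Q.T @ A @ Q)
t2 = time.perf_counter()

print(f"full eigh: {t1 - t0:.3f}s  randomised rank-{r}: {t2 - t1:.3f}s")
```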
Is there an alternative you've considered?
No response
Additional context
No response