It could be interesting to consider how CurvLinOps could support banded matrices, since they are fairly common in many applications involving large-scale linear operators.
For example, solving a least-squares problem involving differential operators boils down to solving a linear system of the form `A @ x = b`, where `A` is typically very high-dimensional, tridiagonal, and Hermitian (and often positive definite). Solving this system via `x = inv(A) @ b` for tridiagonal operators (and banded operators in general) can be done efficiently, as highlighted e.g. in the following scipy routines:
https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solve_banded.html
https://docs.scipy.org/doc/scipy/reference/generated/scipy.linalg.solveh_banded.html
In that case, for a banded linop of shape `(n, n)` with `k` bands, the solver expects a matrix of shape `(k, n)` together with the measurement `b`, and returns `x` as the solution. This seems quite efficient in terms of both memory and runtime.
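To make the banded-storage convention concrete, here is a minimal sketch (not tied to CurvLinOps' API; the operator values are just illustrative) that builds a symmetric positive-definite tridiagonal system, stores it in the `(k, n)` layout, and solves it with `scipy.linalg.solveh_banded`:

```python
import numpy as np
from scipy.linalg import solveh_banded

# Illustrative SPD tridiagonal operator: shifted 1D discrete Laplacian.
n = 1_000
diag = 3.0 * np.ones(n)    # main diagonal
off = -1.0 * np.ones(n - 1)  # sub-/super-diagonal

# Banded storage of shape (k, n) with k = 2 bands (lower form):
# row 0 holds the main diagonal, row 1 the subdiagonal (padded at the end).
ab = np.zeros((2, n))
ab[0] = diag
ab[1, :-1] = off

b = np.random.rand(n)
x = solveh_banded(ab, b, lower=True)

# Sanity check against the dense system (only feasible for moderate n).
A = np.diag(diag) + np.diag(off, -1) + np.diag(off, 1)
assert np.allclose(A @ x, b)
```

The solver only ever touches the `(2, n)` array, so memory and runtime scale linearly in `n` instead of quadratically.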
The direct relation between such operators and DL is not clear to me right now, but one use case could be banded approximations of arbitrary linops and their inverses (e.g. is a tridiagonal Hessian approximation much better than just a diagonal one? Is it better than a rank-k approximation? See the sketch below). More than a feature request, this is a prompt for discussion.
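As a hypothetical way to frame that question, one could compare how well diagonal, tridiagonal, and rank-k structures capture a dense symmetric matrix standing in for a Hessian (all names and values below are purely illustrative, not CurvLinOps functionality):

```python
import numpy as np

# Stand-in "Hessian": a small dense symmetric PSD matrix.
rng = np.random.default_rng(0)
n, k = 50, 3
H = rng.standard_normal((n, n))
H = H @ H.T / n

diag_approx = np.diag(np.diag(H))
tridiag_approx = np.triu(np.tril(H, 1), -1)  # keep main + first off-diagonals

# Rank-k approximation from the top-k eigenpairs.
evals, evecs = np.linalg.eigh(H)
topk = slice(-k, None)
rank_k_approx = (evecs[:, topk] * evals[topk]) @ evecs[:, topk].T

for name, approx in [("diagonal", diag_approx),
                     ("tridiagonal", tridiag_approx),
                     (f"rank-{k}", rank_k_approx)]:
    rel_err = np.linalg.norm(H - approx) / np.linalg.norm(H)
    print(f"{name}: relative Frobenius error {rel_err:.3f}")
```

Whether the tridiagonal band buys anything over the diagonal for actual DL Hessians is exactly the open question here.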
This may be pertinent since there is already another feature request about dealing with diagonal approximations, namely Diag++ (see https://github.com/f-dangel/curvlinops/issues/144).