Luapulu opened this issue 3 years ago
It's because, for adaptively inverting operators, it's more natural to order the coefficients by total polynomial degree.
Note there's `ProductFun` to partially support the matrix-of-coefficients way of thinking. There is old code for inversion of rank-2 PDEs:
https://github.com/JuliaApproximation/PDESchurFactorization.jl
ClassicalOrthogonalPolynomials.jl will eventually have better support for working with matrices of coefficients, but for now I'd suggest doing it by hand.
If you have a basis of Chebyshev polynomials, for example, a 2D function is expanded as

$$f(x, y) = \sum_{k,j} c_{kj} \, T_k(x) \, T_j(y).$$

The coefficients $c_{kj}$ form a matrix, but they are stored as a single vector: the entries are concatenated anti-diagonal by anti-diagonal, i.e. grouped by total polynomial degree $k + j$.
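To make the degree-grouped storage concrete, here is a small NumPy sketch that flattens a coefficient matrix anti-diagonal by anti-diagonal. `flatten_by_degree` is a hypothetical helper written purely for illustration; the order of entries *within* each anti-diagonal is one possible convention, not necessarily the exact one the library uses:

```python
import numpy as np

def flatten_by_degree(C):
    """Flatten a coefficient matrix anti-diagonal by anti-diagonal,
    grouping coefficients c[k, j] by total degree d = k + j."""
    n, m = C.shape
    return np.array([C[k, d - k]
                     for d in range(n + m - 1)                       # each total degree d
                     for k in range(max(0, d - m + 1),               # valid row indices
                                    min(d, n - 1) + 1)])

C = np.arange(9).reshape(3, 3)    # [[0,1,2],[3,4,5],[6,7,8]]
print(flatten_by_degree(C))       # [0 1 3 2 4 6 5 7 8]
```

Each block of the output vector collects all coefficients of one total degree: first degree 0 (`0`), then degree 1 (`1, 3`), then degree 2 (`2, 4, 6`), and so on.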
My question is: why? Would it not be more natural to simply keep using the matrix? Or perhaps reshape the matrix to a vector, so columns are concatenated together?
In my case, using the matrix directly means I don't have a differentiation operator with `1e12` entries, but rather one with `1e6` entries, which is the difference between feasible and impossible. (The reduction occurs because, in the matrix way of doing things, I can apply the 1D operators to each column/row, which amounts to a simple matrix-matrix multiplication.)

Of course, this comes at the cost of doubling the memory, but it seems more than worth it to double the memory at this stage if you can save many orders of magnitude in memory/time when building and applying operators.
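The size argument above rests on the Kronecker-product identity $(A \otimes B)\,\operatorname{vec}(X) = \operatorname{vec}(B X A^\top)$ (with column-major vectorization): acting on the vector of coefficients with a 2D operator built via `kron` gives the same result as multiplying the coefficient matrix by the 1D operator, without ever materializing the huge Kronecker factor. A minimal NumPy sketch, where `D` is a random stand-in for a real 1D differentiation operator:

```python
import numpy as np

n = 50                             # 1D truncation; coefficient matrix is n x n
rng = np.random.default_rng(0)

C = rng.standard_normal((n, n))    # matrix of 2D coefficients c_{kj}
D = rng.standard_normal((n, n))    # stand-in for a 1D operator on coefficients
I = np.eye(n)

def vec(M):
    return M.flatten(order="F")    # column-major vectorization

# Vector-of-coefficients approach: build the full 2D operator explicitly.
big = np.kron(I, D)                # (n^2 x n^2) matrix: n^4 entries
y_vec = big @ vec(C)

# Matrix-of-coefficients approach: apply the 1D operator to each column.
y_mat = D @ C                      # only n^2-entry operators involved

assert np.allclose(y_vec, vec(y_mat))

# Entry counts: n^4 for the Kronecker operator vs n^2 for the 1D one.
# For n = 1000 that is 1e12 vs 1e6 entries, matching the numbers above.
print(big.size, D.size)            # 6250000 2500
```

The same identity handles the other variable via `np.kron(D, I) @ vec(C) == vec(C @ D.T)`, so both row and column applications stay at matrix-matrix cost.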