JuliaApproximation / ApproxFunBase.jl

Core functionality of ApproxFun

Why are coefficients in tensor product bases represented the way they are? #86


Luapulu commented 3 years ago

If you have coefficients in a tensor product basis of Chebyshev polynomials, for example, arranged as a matrix like this:

[T00 T01 T02;
 T10 T11  0 ;
 T20  0   0 ]

These are stored flattened along anti-diagonals, like so:

[T00, T01, T10, T02, T11, T20]

My question is: why? Would it not be more natural to simply keep using the matrix? Or perhaps reshape the matrix to a vector, so the columns are concatenated?

In my case, using the matrix directly means my differentiation operator has on the order of 1e6 entries rather than 1e12, which is the difference between feasible and impossible. (The reduction occurs because, in the matrix representation, I can apply the 1D operators to each column/row, which amounts to a simple matrix-matrix multiplication, as sketched below.)

Of course, this comes at the cost of roughly doubling the memory for the coefficients, but that seems more than worth it if you can save many orders of magnitude in memory and time when building and applying operators.

dlfivefifty commented 3 years ago

It's because, for adaptively inverting operators, it's more natural to order by total polynomial degree.

Note there's ProductFun, which partially supports the matrix-of-coefficients way of thinking. There is old code for inversion of rank-2 PDEs:

https://github.com/JuliaApproximation/PDESchurFactorization.jl

ClassicalOrthogonalPolynomials.jl will eventually have better support for working with matrices of coefficients, but for now I'd suggest doing it by hand.