JuliaApproximation / ApproxFunBase.jl

Core functionality of ApproxFun
MIT License

Why won't `AbstractMatrix(::SubOperator)` return a `BandedBlockBandedMatrix`? #81

Open Luapulu opened 3 years ago

Luapulu commented 3 years ago

I see that the line to return such a matrix is commented out below. Why?

https://github.com/JuliaApproximation/ApproxFunBase.jl/blob/25ee448cbde61ef8e01f5c9d5af63ac3364a751c/src/Operators/Operator.jl#L776

I'd like to preallocate large operator matrices, and currently the RaggedMatrix type seems to be the default. The issue is that RaggedMatrix uses too much memory. For now I'm constructing a BandedBlockBandedMatrix manually, but that's not as nice as just being able to use AbstractMatrix(view(SomeOperator, FiniteRange, 1:N)), since I don't want to have to care about the exact memory layout myself.
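As a sketch of the manual workaround described above (assuming BlockBandedMatrices.jl; the block sizes and bandwidths below are illustrative placeholders, not taken from any actual ApproxFunBase operator, and the constructor signature follows recent BlockBandedMatrices.jl releases):

```julia
using BlockBandedMatrices

# Preallocate a banded-block-banded matrix by hand. The block sizes and the
# block/sub-block bandwidths (l, u) and (λ, μ) are illustrative values; for a
# real operator they would come from its actual sparsity structure.
N = 5                 # number of blocks
rows = cols = 1:N     # block k has size k (triangular blocking, as in 2D tensor bases)
l, u = 0, 1           # block bandwidths
λ, μ = 0, 1           # sub-block bandwidths

A = BandedBlockBandedMatrix{Float64}(undef, rows, cols, (l, u), (λ, μ))
fill!(A, 0.0)         # the buffer is uninitialized until filled
```

Filling `A` then requires knowing which block and sub-block each operator entry lands in, which is exactly the memory-layout bookkeeping the `AbstractMatrix(view(...))` call would ideally hide.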

dlfivefifty commented 3 years ago

TBH I don't remember. I'm redesigning a lot of this in ClassicalOrthogonalPolynomials.jl. Maybe what you are trying to do works there already?

Luapulu commented 3 years ago

I need to differentiate and integrate on a 2D domain in Chebyshev tensor product basis. I believe ClassicalOrthogonalPolynomials.jl can't do that yet?

dlfivefifty commented 3 years ago

You are right it's not built in, but perhaps you can build the operators you want via:

julia> using ClassicalOrthogonalPolynomials, LazyBandedMatrices

julia> T,U = ChebyshevT(), ChebyshevU();

julia> x = axes(T,1); D = Derivative(x);

julia> D_x = KronTrav(U\D*T, U\T)
ℵ₀×ℵ₀-blocked ℵ₀×ℵ₀ KronTrav{Float64, 2, BandedMatrix{Float64, Adjoint{Float64, InfiniteArrays.InfStepRange{Float64, Float64}}, InfiniteArrays.OneToInf{Int64}}, BandedMatrix{Float64, LazyArrays.ApplyArray{Float64, 2, typeof(vcat), Tuple{Fill{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}, Zeros{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}, LazyArrays.ApplyArray{Float64, 2, typeof(hcat), Tuple{Ones{Float64, 2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}, Fill{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}}}}}, InfiniteArrays.OneToInf{Int64}}, Tuple{BlockedUnitRange{RangeCumsum{Int64, InfiniteArrays.OneToInf{Int64}}}, BlockedUnitRange{RangeCumsum{Int64, InfiniteArrays.OneToInf{Int64}}}}}:
  ⋅   │   ⋅   1.0  │   ⋅    0.0   ⋅   │   ⋅   -0.5   ⋅    ⋅   │   ⋅     ⋅     ⋅    ⋅    ⋅   │   ⋅     ⋅     ⋅ 
 ─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼─────────────────  …  
  ⋅   │   ⋅    ⋅   │    ⋅   0.5   ⋅   │   ⋅    0.0   ⋅    ⋅   │   ⋅   -0.5    ⋅    ⋅    ⋅   │   ⋅     ⋅     ⋅      
  ⋅   │   ⋅    ⋅   │    ⋅    ⋅   2.0  │   ⋅     ⋅   0.0   ⋅   │   ⋅     ⋅   -1.0   ⋅    ⋅   │   ⋅     ⋅     ⋅ 
 ─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼─────────────────     
  ⋅   │   ⋅    ⋅   │    ⋅    ⋅    ⋅   │   ⋅    0.5   ⋅    ⋅   │   ⋅    0.0    ⋅    ⋅    ⋅   │   ⋅   -0.5    ⋅      
  ⋅   │   ⋅    ⋅   │    ⋅    ⋅    ⋅   │   ⋅     ⋅   1.0   ⋅   │   ⋅     ⋅    0.0   ⋅    ⋅   │   ⋅     ⋅   -1.0     
  ⋅   │   ⋅    ⋅   │    ⋅    ⋅    ⋅   │   ⋅     ⋅    ⋅   3.0  │   ⋅     ⋅     ⋅   0.0   ⋅   │   ⋅     ⋅     ⋅ 
 ─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼─────────────────  …  
  ⋅   │   ⋅    ⋅   │    ⋅    ⋅    ⋅   │   ⋅     ⋅    ⋅    ⋅   │   ⋅    0.5    ⋅    ⋅    ⋅   │   ⋅    0.0    ⋅      
  ⋅   │   ⋅    ⋅   │    ⋅    ⋅    ⋅   │   ⋅     ⋅    ⋅    ⋅   │   ⋅     ⋅    1.0   ⋅    ⋅   │   ⋅     ⋅    0.0     
  ⋅   │   ⋅    ⋅   │    ⋅    ⋅    ⋅   │   ⋅     ⋅    ⋅    ⋅   │   ⋅     ⋅     ⋅   1.5   ⋅   │   ⋅     ⋅     ⋅      
  ⋅   │   ⋅    ⋅   │    ⋅    ⋅    ⋅   │   ⋅     ⋅    ⋅    ⋅   │   ⋅     ⋅     ⋅    ⋅   4.0  │   ⋅     ⋅     ⋅ 
 ─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼─────────────────     
  ⋅   │   ⋅    ⋅   │    ⋅    ⋅    ⋅   │   ⋅     ⋅    ⋅    ⋅   │   ⋅     ⋅     ⋅    ⋅    ⋅   │   ⋅    0.5    ⋅   …  
  ⋅   │   ⋅    ⋅   │    ⋅    ⋅    ⋅   │   ⋅     ⋅    ⋅    ⋅   │   ⋅     ⋅     ⋅    ⋅    ⋅   │   ⋅     ⋅    1.0     
  ⋅   │   ⋅    ⋅   │    ⋅    ⋅    ⋅   │   ⋅     ⋅    ⋅    ⋅   │   ⋅     ⋅     ⋅    ⋅    ⋅   │   ⋅     ⋅     ⋅      
  ⋅   │   ⋅    ⋅   │    ⋅    ⋅    ⋅   │   ⋅     ⋅    ⋅    ⋅   │   ⋅     ⋅     ⋅    ⋅    ⋅   │   ⋅     ⋅     ⋅      
  ⋅   │   ⋅    ⋅   │    ⋅    ⋅    ⋅   │   ⋅     ⋅    ⋅    ⋅   │   ⋅     ⋅     ⋅    ⋅    ⋅   │   ⋅     ⋅     ⋅ 
 ─────┼────────────┼──────────────────┼───────────────────────┼─────────────────────────────┼─────────────────     
 ⋮                         ⋮                         ⋮                          ⋮                ⋱  

julia> D_y = KronTrav(U\T, U\D*T)
ℵ₀×ℵ₀-blocked ℵ₀×ℵ₀ KronTrav{Float64, 2, BandedMatrix{Float64, LazyArrays.ApplyArray{Float64, 2, typeof(vcat), Tuple{Fill{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}, Zeros{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}, LazyArrays.ApplyArray{Float64, 2, typeof(hcat), Tuple{Ones{Float64, 2, Tuple{Base.OneTo{Int64}, Base.OneTo{Int64}}}, Fill{Float64, 2, Tuple{Base.OneTo{Int64}, InfiniteArrays.OneToInf{Int64}}}}}}}, InfiniteArrays.OneToInf{Int64}}, BandedMatrix{Float64, Adjoint{Float64, InfiniteArrays.InfStepRange{Float64, Float64}}, InfiniteArrays.OneToInf{Int64}}, Tuple{BlockedUnitRange{RangeCumsum{Int64, InfiniteArrays.OneToInf{Int64}}}, BlockedUnitRange{RangeCumsum{Int64, InfiniteArrays.OneToInf{Int64}}}}}:
  ⋅   │  1.0  0.0  │  0.0  0.0  0.0  │  0.0  0.0  -0.5   ⋅   │   ⋅    ⋅    ⋅      ⋅    ⋅   │   ⋅    ⋅    ⋅      ⋅ 
 ─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼──────────────────────  …  
  ⋅   │   ⋅    ⋅   │  2.0  0.0  0.0  │  0.0  0.0   0.0   ⋅   │  0.0  0.0  -1.0    ⋅    ⋅   │   ⋅    ⋅     ⋅     ⋅      
  ⋅   │   ⋅    ⋅   │   ⋅   0.5  0.0  │   ⋅   0.0   0.0  0.0  │   ⋅   0.0   0.0  -0.5   ⋅   │   ⋅    ⋅     ⋅     ⋅ 
 ─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼──────────────────────     
  ⋅   │   ⋅    ⋅   │   ⋅    ⋅    ⋅   │  3.0  0.0   0.0   ⋅   │  0.0  0.0   0.0    ⋅    ⋅   │  0.0  0.0  -1.5    ⋅      
  ⋅   │   ⋅    ⋅   │   ⋅    ⋅    ⋅   │   ⋅   1.0   0.0  0.0  │   ⋅   0.0   0.0   0.0   ⋅   │   ⋅   0.0   0.0  -1.0     
  ⋅   │   ⋅    ⋅   │   ⋅    ⋅    ⋅   │   ⋅    ⋅    0.5  0.0  │   ⋅    ⋅    0.0   0.0  0.0  │   ⋅    ⋅    0.0   0.0
 ─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼──────────────────────  …  
  ⋅   │   ⋅    ⋅   │   ⋅    ⋅    ⋅   │   ⋅    ⋅     ⋅    ⋅   │  4.0  0.0   0.0    ⋅    ⋅   │  0.0  0.0   0.0    ⋅      
  ⋅   │   ⋅    ⋅   │   ⋅    ⋅    ⋅   │   ⋅    ⋅     ⋅    ⋅   │   ⋅   1.5   0.0   0.0   ⋅   │   ⋅   0.0   0.0   0.0     
  ⋅   │   ⋅    ⋅   │   ⋅    ⋅    ⋅   │   ⋅    ⋅     ⋅    ⋅   │   ⋅    ⋅    1.0   0.0  0.0  │   ⋅    ⋅    0.0   0.0     
  ⋅   │   ⋅    ⋅   │   ⋅    ⋅    ⋅   │   ⋅    ⋅     ⋅    ⋅   │   ⋅    ⋅     ⋅    0.5  0.0  │   ⋅    ⋅     ⋅    0.0
 ─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼──────────────────────     
  ⋅   │   ⋅    ⋅   │   ⋅    ⋅    ⋅   │   ⋅    ⋅     ⋅    ⋅   │   ⋅    ⋅     ⋅     ⋅    ⋅   │  5.0  0.0   0.0    ⋅   …  
  ⋅   │   ⋅    ⋅   │   ⋅    ⋅    ⋅   │   ⋅    ⋅     ⋅    ⋅   │   ⋅    ⋅     ⋅     ⋅    ⋅   │   ⋅   2.0   0.0   0.0     
  ⋅   │   ⋅    ⋅   │   ⋅    ⋅    ⋅   │   ⋅    ⋅     ⋅    ⋅   │   ⋅    ⋅     ⋅     ⋅    ⋅   │   ⋅    ⋅    1.5   0.0     
  ⋅   │   ⋅    ⋅   │   ⋅    ⋅    ⋅   │   ⋅    ⋅     ⋅    ⋅   │   ⋅    ⋅     ⋅     ⋅    ⋅   │   ⋅    ⋅     ⋅    1.0     
  ⋅   │   ⋅    ⋅   │   ⋅    ⋅    ⋅   │   ⋅    ⋅     ⋅    ⋅   │   ⋅    ⋅     ⋅     ⋅    ⋅   │   ⋅    ⋅     ⋅     ⋅ 
 ─────┼────────────┼─────────────────┼───────────────────────┼─────────────────────────────┼──────────────────────     
 ⋮                        ⋮                         ⋮                          ⋮                     ⋱  

Luapulu commented 3 years ago

That looks interesting. Thanks for the help.

dlfivefifty commented 3 years ago

Btw, I'd love to get Kronecker products working in ContinuumArrays.jl/ClassicalOrthogonalPolynomials.jl. If you feel like helping with that, I can walk you through the planned design.

Luapulu commented 3 years ago

Is the plan to replace ApproxFunOrthogonalPolynomials.jl with ClassicalOrthogonalPolynomials.jl?

dlfivefifty commented 3 years ago

Yes that is the plan.

  1. ContinuumArrays.jl is "functions-as-vectors". This is a more natural language for doing linear algebra, e.g., solving linear ODEs, taking subspaces of bases.
  2. ApproxFun.jl will be "functions-as-functions". It will call ContinuumArrays.jl for linear operations (e.g., solving ODEs, addition), so it will be a thin wrapper.
  3. ClassicalOrthogonalPolynomials.jl will add support for orthogonal polynomials.
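A small sketch of the "functions-as-vectors" idiom from ContinuumArrays.jl/ClassicalOrthogonalPolynomials.jl (the lines follow the packages' quasi-array interface as I understand it; treat the exact calls as illustrative rather than a definitive API reference):

```julia
using ClassicalOrthogonalPolynomials

T = ChebyshevT()     # the basis is a quasi-matrix: continuum of points × ℵ₀ polynomials
x = axes(T, 1)       # continuous "row axis", the interval [-1, 1]

c = T \ exp.(x)      # expand exp in Chebyshev T: coefficients as an ∞-vector
f = T * c            # the function itself, "basis times coefficients"

D = Derivative(x)
U = ChebyshevU()
D_T = U \ (D * T)    # the derivative operator as an ∞-matrix mapping T coefficients to U
```

This is the linear-algebra language point 1 refers to: expansion is a least-squares solve, differentiation is a matrix, and the `KronTrav` construction above lifts such matrices to 2D tensor-product bases.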

Note there's a WIP PR to make the change

https://github.com/JuliaApproximation/ApproxFunBase.jl/pull/44

The biggest hold-up is that a lot of functionality is still missing in terms of analogues of ArraySpace and TensorSpace. Though this is moving quickly, and the things that have been updated are much faster. E.g., building multiplication operators via Clenshaw is 10x faster in ClassicalOrthogonalPolynomials.jl: https://github.com/MikaelSlevinsky/FastTransforms/pull/74

Luapulu commented 3 years ago

So, my plan at the moment is to replace all my current high-level ApproxFun use with FastTransforms.jl for the transforms to and from the two kinds of Chebyshev bases, and to replace all the derivatives, integrals and conversions with matrix multiplications so I can preallocate the matrices. Whether I use ApproxFunOrthogonalPolynomials.jl or ClassicalOrthogonalPolynomials.jl to generate the matrices in the first place shouldn't matter too much, so switching over isn't a high priority for me on that front.

However, I do have a few little things to add to FastTransforms.jl. For example, a method for the Padua transform that allows me to do something like Padua_transform(into, from, plan), so I can preallocate the output. Maybe some documentation/comments, too. And if you have any big changes planned for FastTransforms.jl, that would be quite interesting, since the transforms are the biggest limiting factor for my use case at the moment.
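A minimal sketch of what such an in-place method could look like. Everything here is hypothetical: `DummyPaduaPlan` and `padua_transform!` are made-up names standing in for a real FastTransforms.jl plan, and the "transform" is just a precomputed matrix, so only the preallocation pattern is being illustrated:

```julia
using LinearAlgebra

# Hypothetical stand-in for a precomputed transform plan; a real Padua plan
# would hold FFT plans and index maps rather than a dense matrix.
struct DummyPaduaPlan
    M::Matrix{Float64}
end

# Proposed shape of the API: write the result into a preallocated buffer.
function padua_transform!(into::AbstractVector, from::AbstractVector, plan::DummyPaduaPlan)
    mul!(into, plan.M, from)   # no allocation: the result lands in `into`
    return into
end

plan = DummyPaduaPlan([1.0 0.0; 0.0 2.0])
out = zeros(2)
padua_transform!(out, [3.0, 4.0], plan)   # out == [3.0, 8.0]
```

The point is simply that the caller owns `out`, so repeated transforms in a hot loop allocate nothing.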

If you're curious, I'm the student working on PoincareInvariants.jl.

Luapulu commented 3 years ago

Actually, my plan doesn't quite work out, because the transform for Chebyshev polynomials is in ApproxFunOrthogonalPolynomials.jl, as I understand it? There's also a version in FastTransforms, but you said it's not used?

So, one of the transforms will still come from ApproxFun I guess.

dlfivefifty commented 3 years ago

The transforms all come from FastTransforms.jl, it's just a question of which transform is used.

Note the transforms are also used in ClassicalOrthogonalPolynomials.jl
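For reference, a sketch of calling the Chebyshev transform directly from FastTransforms.jl, with no ApproxFun layer. The function names reflect my reading of that package's exports (`chebyshevtransform`, `plan_chebyshevtransform`); verify against its documentation before relying on them:

```julia
using FastTransforms

# Values at first-kind Chebyshev points -> Chebyshev-T coefficients.
n = 16
x = [cospi((k + 0.5) / n) for k in 0:n-1]   # first-kind Chebyshev points
v = exp.(x)                                  # samples of a function

c = chebyshevtransform(v)                    # coefficients in the T basis

# For repeated transforms of same-length vectors, precompute a plan once
# and apply it with *, avoiding repeated FFT setup.
P = plan_chebyshevtransform(v)
c2 = P * v
```

This is the sense in which "the transforms all come from FastTransforms.jl": the higher-level packages wrap calls like these, differing only in which variant and normalization they pick.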