JuliaApproximation / CompactBases.jl

Julia library for function approximation with compact basis functions
MIT License

Start porting to new QuasiArrays #22

Closed. jagot closed this 3 years ago.

codecov[bot] commented 4 years ago

Codecov Report

Merging #22 (0179540) into master (4d58f0c) will increase coverage by 0.18%. The diff coverage is 87.50%.

:exclamation: Current head 0179540 differs from pull request most recent head 6da63dd. Consider uploading reports for the commit 6da63dd to get more accurate results.

@@            Coverage Diff             @@
##           master      #22      +/-   ##
==========================================
+ Coverage   69.59%   69.77%   +0.18%     
==========================================
  Files          21       21              
  Lines        1424     1360      -64     
==========================================
- Hits          991      949      -42     
+ Misses        433      411      -22     
Impacted Files Coverage Δ
src/CompactBases.jl 88.88% <ø> (ø)
src/bspline_derivatives.jl 40.00% <ø> (+9.69%) :arrow_up:
src/bsplines.jl 76.64% <ø> (+3.35%) :arrow_up:
src/fd_operators.jl 54.54% <ø> (-18.79%) :arrow_down:
src/fedvr_operators.jl 86.81% <ø> (-0.15%) :arrow_down:
src/finite_differences.jl 78.32% <ø> (ø)
src/fd_derivatives.jl 68.65% <50.00%> (+7.83%) :arrow_up:
src/fedvr_derivatives.jl 94.44% <100.00%> (+21.71%) :arrow_up:
src/materialize_dsl.jl 87.27% <100.00%> (-1.07%) :arrow_down:
src/restricted_bases.jl 73.80% <100.00%> (ø)
... and 7 more

Continue to review full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data. Powered by Codecov. Last update 4d58f0c...6da63dd.

dlfivefifty commented 4 years ago

Let me know if you need any help with this. Some points:

  1. All 2-argument multiplications now pipe through mul -> materialize(Mul(A,B)) (a small check of this chain on plain arrays follows below).
  2. If A or B have ApplyLayout{typeof(*)}, it tries to simplify. Whether this is possible is decided by calling simplifiable.
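For concreteness, a small check of that chain on plain arrays (Mul and mul are ArrayLayouts internals, so they are accessed qualified here; purely illustrative):

using ArrayLayouts

A, B = rand(3, 3), rand(3, 3)
M = ArrayLayouts.Mul(A, B)        # the lazy product object everything is lowered to
ArrayLayouts.mul(A, B) ≈ A * B    # true; internally mul(A, B) = materialize(Mul(A, B))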
jagot commented 4 years ago

Very good, thanks! I think I mostly had issues with v = R*c, where R is restricted, in that c was being padded with zeros. It would be good if I could fix that.

Also, I noticed that QuasiArrays is stuck at 0.2.3 because ContinuumArrays is not released yet, so I actually don't know if it works or not :) Will have to try locally.
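(For reference, one way to try the unreleased combination locally is to dev local clones; the paths below are placeholders:)

using Pkg
Pkg.develop(path="../QuasiArrays.jl")
Pkg.develop(path="../ContinuumArrays.jl")
Pkg.develop(path="../CompactBases.jl")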

jagot commented 4 years ago

It seems a lot of things broke, so fixing this will be breaking, right? Then I might as well drop Julia < 1.5 at the same time.

jagot commented 4 years ago

The first thing that fails is the following:


B = FiniteDifferences(5,1.0)
x = axes(B,1)
x² = B'QuasiDiagonal(x.^2)*B
@test x² isa Diagonal

I think this breaks because previously B'QuasiDiagonal(x.^2)*B was lowered to a LazyArrays.Mul, i.e. a LazyArrays.Applied{<:ApplyStyle, typeof(*), ...}, whereas now it is lowered to an ApplyQuasiArray.

(EDIT: I think it is not picking up my custom ApplyStyle anymore, that's why it results in an ApplyQuasiArray, does that make sense?)

Invoking a 3-arg apply instead works as intended:

julia> apply(*, B', QuasiDiagonal(x.^2), B)
5×5 Diagonal{Float64,Array{Float64,1}}:
 1.0   ⋅    ⋅     ⋅     ⋅
  ⋅   4.0   ⋅     ⋅     ⋅
  ⋅    ⋅   9.0    ⋅     ⋅
  ⋅    ⋅    ⋅   16.0    ⋅
  ⋅    ⋅    ⋅     ⋅   25.0

Then the problem of restricted bases being expanded and the coefficient vectors zero-padded remains:

julia> B̃ = B[:, 2:end-1]
Finite differences basis {Float64} on 0.0..6.0 with 5 points spaced by Δx = 1.0, restricted to basis functions 2..4 ⊂ 1..5

julia> c̃ = rand(size(B̃, 2))
3-element Array{Float64,1}:
 0.05671328944836307
 0.5851879217376352
 0.13677518135154187

julia> ṽ = B̃*c̃
ApplyQuasiArray{Float64,1,typeof(*),Tuple{FiniteDifferences{Float64,Int64},Array{Float64,1}}}(*, (Finite differences basis {Float64} on 0.0..6.0 with 5 points spaced by Δx = 1.0,
[0.0, 0.05671328944836307, 0.5851879217376352, 0.13677518135154187, 0.0]))

julia> ṽ isa ContinuumArrays.Expansion
true

Inner products work as intended with this kind of expansion:

julia> ṽ'ṽ
0.36436875118141365

However, explicitly using the lazy representation where the coefficients are not zero-padded no longer works with inner products, since I can't seem to take its adjoint:

julia> v′ = applied(*, B̃, c̃)
Applied(*,Finite differences basis {Float64} on 0.0..6.0 with 5 points spaced by Δx = 1.0, restricted to basis functions 2..4 ⊂ 1..5,
[0.05671328944836307, 0.5851879217376352, 0.13677518135154187])

julia> v′'v′
ERROR: MethodError: no method matching adjoint(::Applied{LazyArrays.MulStyle,typeof(*),Tuple{QuasiArrays.SubQuasiArray{Float64,2,FiniteDifferences{Float64,Int64},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},UnitRange{Int64}},false},Array{Float64,1}}})
Closest candidates are:
  adjoint(::Missing) at missing.jl:100
  adjoint(::Number) at number.jl:169
  adjoint(::Adjoint{var"#s173",var"#s172"} where var"#s172"<:Union{StaticArrays.StaticArray{Tuple{N},T,1} where T where N, StaticArrays.StaticArray{Tuple{N,M},T,2} where T where M where N} where var"#s173") at /Users/jagot/.julia/packages/StaticArrays/l7lu2/src/linalg.jl:73
  ...
Stacktrace:
 [1] top-level scope at REPL[80]:1
dlfivefifty commented 4 years ago

julia> v′ = applied(*, B̃, c̃)
Applied(*,Finite differences basis {Float64} on 0.0..6.0 with 5 points spaced by Δx = 1.0, restricted to basis functions 2..4 ⊂ 1..5,
[0.05671328944836307, 0.5851879217376352, 0.13677518135154187])

Note you should probably work with ApplyQuasiArray(*, B̃, c̃), as the Applied object is really just a middleman.
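For illustration, with B̃ and c̃ as in the example above (a sketch; whether the inner product then simplifies fully is not claimed here):

ṽ = ApplyQuasiArray(*, B̃, c̃)   # wraps the restricted basis and the unpadded coefficients
ṽ'                              # an AbstractQuasiVector, so its adjoint is defined,
                                # unlike the adjoint of the bare Applied object above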

jagot commented 4 years ago

I'm sort of not seeing how I should go about this; returning to the above example

B = FiniteDifferences(5,1.0)
x = axes(B,1)
x² = B'QuasiDiagonal(x.^2)*B # What should be implemented for this to materialize properly?

Should I implement a three-argument simplifiable(*, ...) = Val(true) (why the Val?), and which materialize method should I implement, i.e. taking a Mul argument or a MulQuasiArray argument?

I try to retain the pattern materialize(M) = copyto!(similar(M), M); is that still the way to go?

Ideally apply(*, B', QuasiDiagonal(x.^2), B) should still materialize fully as well, but which lazy object should apply(*, B', QuasiDiagonal(x.^2), B) result in? I assume a MulQuasiArray.

dlfivefifty commented 4 years ago

apply doesn't work for me:

julia> apply(*, B', QuasiDiagonal(x.^2), B)
QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiArrays.QuasiAdjoint{Float64,FiniteDifferences{Float64,Int64}},QuasiDiagonal{Float64,QuasiArrays.BroadcastQuasiArray{Float64,1,typeof(Base.literal_pow),Tuple{Base.RefValue{typeof(^)},Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Base.RefValue{Val{2}}}}},FiniteDifferences{Float64,Int64}}}(*, (QuasiArrays.QuasiAdjoint{Float64,FiniteDifferences{Float64,Int64}}(Finite differences basis {Float64} on 0.0..6.0 with 5 points spaced by Δx = 1.0), QuasiDiagonal{Float64,QuasiArrays.BroadcastQuasiArray{Float64,1,typeof(Base.literal_pow),Tuple{Base.RefValue{typeof(^)},Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Base.RefValue{Val{2}}}}}(Inclusion(0.0..6.0) .^ 2), Finite differences basis {Float64} on 0.0..6.0 with 5 points spaced by Δx = 1.0))

Where's the code you expect to be called?

jagot commented 4 years ago

It did work, before c2bbf8db3a56b244c4e338dfc4f653d7adbff40c. I tried to modify the @materialize macro to bypass Applied, but I am not sure what the proper flow should be.

jagot commented 4 years ago

Where's the code you expect to be called?

https://github.com/JuliaApproximation/CompactBases.jl/blob/c2bbf8db3a56b244c4e338dfc4f653d7adbff40c/src/fd_operators.jl#L6-L18

This block generates a similar method for a

MulQuasiArray{T,<:Any,<:Tuple{Ac::AdjointBasisOrRestricted{<:AbstractFiniteDifferences},
                              D::QuasiDiagonal,
                              B::BasisOrRestricted{<:AbstractFiniteDifferences}}}

Similarly,

https://github.com/JuliaApproximation/CompactBases.jl/blob/c2bbf8db3a56b244c4e338dfc4f653d7adbff40c/src/fd_operators.jl#L19-L26

this results in a copyto!(A::Diagonal, M::MulQuasiArray{...}).

dlfivefifty commented 4 years ago

I don't see the problem: @simplify works. That is, if I add the following method

@simplify  function *(Ac::AdjointBasisOrRestricted{<:AbstractFiniteDifferences},
                        D::QuasiDiagonal,
                        B::BasisOrRestricted{<:AbstractFiniteDifferences})
    T = promote_type(eltype(Ac), eltype(D), eltype(B))
    A = parent(Ac)
    parent(A) == parent(B) ||
        throw(ArgumentError("Cannot multiply functions on different grids"))
    Ai,Bi = indices(A,2), indices(B,2)
    if Ai == Bi
        Diagonal(Vector{T}(undef, length(Ai)))
    else
        m,n = length(Ai),length(Bi)
        offset = Ai[1]-Bi[1]
        BandedMatrix{T}(undef, (m,n), (-offset,offset))
    end
end

Then we get

julia> x² = B'QuasiDiagonal(x.^2)*B
5×5 Diagonal{Float64,Array{Float64,1}}:
 2.24932e-314   ⋅             ⋅             ⋅             ⋅ 
  ⋅            2.27894e-314   ⋅             ⋅             ⋅ 
  ⋅             ⋅            2.21545e-314   ⋅             ⋅ 
  ⋅             ⋅             ⋅            2.21545e-314   ⋅ 
  ⋅             ⋅             ⋅             ⋅            2.21545e-314
jagot commented 4 years ago

But that only returns an uninitialized matrix? I definitely expect B'QuasiDiagonal(x.^2)*B to return

5×5 Diagonal{Float64,Array{Float64,1}}:
 1.0   ⋅    ⋅     ⋅     ⋅
  ⋅   4.0   ⋅     ⋅     ⋅
  ⋅    ⋅   9.0    ⋅     ⋅
  ⋅    ⋅    ⋅   16.0    ⋅
  ⋅    ⋅    ⋅     ⋅   25.0

but I still want to separate the implementation of allocating the output matrix (currently via similar) from the computation of the matrix elements (currently via copyto!).
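A free-standing sketch of that two-stage split, applied to the x² example above (the helper names are hypothetical, not the package's code):

using LinearAlgebra

# stage 1: allocate the output container ("similar"); may be type-unstable
allocate_diag(::Type{T}, n) where T = Diagonal(Vector{T}(undef, n))

# stage 2: compute the matrix elements ("copyto!"); type-stable, reusable for updates
function compute_diag!(A::Diagonal, x)
    A.diag .= x.^2   # ⟨Bᵢ, x² Bⱼ⟩ = xᵢ² δᵢⱼ in the example above (Δx = 1)
    A
end

compute_diag!(allocate_diag(Float64, 5), 1.0:5.0)
# 5×5 Diagonal with diagonal [1.0, 4.0, 9.0, 16.0, 25.0], as expected above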

So, should B'QuasiDiagonal(x.^2)*B -> mul(B', QuasiDiagonal(x.^2), B) -> materialize(Mul(B', QuasiDiagonal(x.^2), B))?

dlfivefifty commented 4 years ago

No: more like

B'QuasiDiagonal(x.^2)*B -> *(*(B', QuasiDiagonal(x.^2)), B)  -> mul(ApplyQuasiArray(*, B', QuasiDiagonal(x.^2)), B) -> _simplify(*, B', QuasiDiagonal(x.^2), B)

Debuggers help explain it; here's a partial walk-through, where at the start A = ApplyQuasiArray(*, B', QuasiDiagonal(x.^2)) (which is created by B'QuasiDiagonal(x.^2)) and B is still B:

In *(A, B) at /Users/sheehanolver/.julia/packages/QuasiArrays/9YqBZ/src/matmul.jl:32
>32  *(A::AbstractQuasiArray, B::AbstractQuasiArray) = mul(A, B)

In mul(A, B) at /Users/sheehanolver/.julia/packages/ArrayLayouts/wTf1Q/src/mul.jl:106
>106  @inline mul(A, B) = materialize(Mul(A,B))

In materialize(M) at /Users/sheehanolver/.julia/packages/ArrayLayouts/wTf1Q/src/mul.jl:105
>105  materialize(M::Mul) = copy(instantiate(M))

In copy(M) at /Users/sheehanolver/.julia/packages/LazyArrays/DfNL4/src/linalg/mul.jl:302
>302  @inline copy(M::Mul{<:AbstractLazyLayout,<:AbstractLazyLayout}) = simplify(M)

In simplify(M) at /Users/sheehanolver/.julia/packages/LazyArrays/DfNL4/src/linalg/mul.jl:298
>298  simplify(M::Mul) = simplify(*, M.A, M.B)

In simplify(#unused#, args) at /Users/sheehanolver/.julia/packages/LazyArrays/DfNL4/src/linalg/mul.jl:288
>288  @inline simplify(::typeof(*), args...) = _simplify(*, _flatten(args...)...)

And the body of the function is then implemented as

_simplify(::typeof(*), Ac::AdjointBasisOrRestricted{<:AbstractFiniteDifferences}, D::QuasiDiagonal, B::BasisOrRestricted{<:AbstractFiniteDifferences})
jagot commented 4 years ago

Ok, so this works, but is this the preferred implementation pattern now?

https://github.com/JuliaApproximation/CompactBases.jl/blob/6a4874fac07ffaed0cec9de108b64d4d15402ba2/src/fd_operators.jl#L1-L61

If so, I would need to generalize @simplify to arbitrary number of arguments.

Incidentally, I think that the current definition of QMul3 is wrong:

https://github.com/JuliaApproximation/ContinuumArrays.jl/blob/master/src/ContinuumArrays.jl#L72-L73

since Mul has five type parameters and two fields:

https://github.com/JuliaMatrices/ArrayLayouts.jl/blob/master/src/mul.jl#L1-L4

dlfivefifty commented 4 years ago

Hmm yes I think you are right that QMul3 is basically not used anywhere anymore....

Perhaps best to avoid @simplify for now; I would say it has a giant target on its head as I think macros are best avoided. It doesn't actually do much:

https://github.com/JuliaApproximation/ContinuumArrays.jl/blob/e2895539eb046c82e55f19feec43bb18726ae047/src/operators.jl#L50

So you can do the same thing with an arbitrary number of arguments with an overload of simplifiable(::typeof(*), ::Type1, ::Type2, ...) = Val(true) and _simplify(::typeof(*), ::Type1, ::Type2, ...).

Note the Val(true) is necessary so the lowering happens at compile time which makes it type-stable.
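A sketch of what such an overload pair could look like for the finite-difference example above (the helpers similar_output/compute_elements! are placeholders, and the import assumes these functions live in LazyArrays, as the stacktraces above indicate for _simplify):

import LazyArrays: simplifiable, _simplify

simplifiable(::typeof(*), ::AdjointBasisOrRestricted{<:AbstractFiniteDifferences},
             ::QuasiDiagonal, ::BasisOrRestricted{<:AbstractFiniteDifferences}) = Val(true)

function _simplify(::typeof(*), Ac::AdjointBasisOrRestricted{<:AbstractFiniteDifferences},
                   D::QuasiDiagonal, B::BasisOrRestricted{<:AbstractFiniteDifferences})
    dest = similar_output(Ac, D, B)     # allocate a Diagonal/BandedMatrix as in @simplify above
    compute_elements!(dest, Ac, D, B)   # fill in the matrix elements
end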

jagot commented 4 years ago

Gotcha. I think the least disruptive for me is to adapt the @materialize macro to generate simplifiable and _simplify methods, since it seems to work and the amount of boilerplate (args destructuring, axes checking) can be kept down.

jagot commented 4 years ago

There is a ridiculous amount of breakage: https://travis-ci.org/github/JuliaApproximation/CompactBases.jl/jobs/726268359#L12836

For instance, we now trigger some internal compiler error: https://travis-ci.org/github/JuliaApproximation/CompactBases.jl/jobs/726268359#L10684

One really bad regression with respect to restricted bases is the following:

julia> N = 10
10

julia> ρ = 1.0
1.0

julia> R = FiniteDifferences(N, ρ)
Finite differences basis {Float64} on 0.0..11.0 with 10 points spaced by Δx = 1.0

julia> r = axes(R, 1)
Inclusion(0.0..11.0)

julia> sel = 3:6
3:6

julia> R̃ = R[:, sel]
Finite differences basis {Float64} on 0.0..11.0 with 10 points spaced by Δx = 1.0, restricted to basis functions 3..6 ⊂ 1..10

julia> x = QuasiDiagonal(r)
QuasiDiagonal{Float64,Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}(Inclusion(0.0..11.0))

julia> R̃'*x*R̃
4×4 BandedMatrices.BandedMatrix{Float64,Array{Float64,2},Base.OneTo{Int64}}:
 3.0   ⋅    ⋅    ⋅
  ⋅   4.0   ⋅    ⋅
  ⋅    ⋅   5.0   ⋅
  ⋅    ⋅    ⋅   6.0

This should be a Diagonal matrix (since the restriction is the same on the left- and right-hand sides of the QuasiDiagonal). Indeed, I see that the Diagonal matrix is created, but for the unrestricted basis; the restriction is then applied by multiplying by restriction matrices from the left and the right. This is wasteful, since matrix elements that will subsequently be thrown away are computed.
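For reference, what the restricted product above should produce directly, without ever forming the full 10×10 operator (values read off the example: grid points 3.0 to 6.0):

using LinearAlgebra
Diagonal([3.0, 4.0, 5.0, 6.0])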

I assume this is the same (or related to) problem as I noted above, i.e. that the restrictions are dropped and vectors zero-padded.

This problem is exacerbated when using FE-DVR, since the derivative matrices are no longer BlockSkylineMatrices, but dense:

julia> N = 10
10

julia> t = range(0,stop=2,length=N)
0.0:0.2222222222222222:2.0

julia> R = FEDVR(t, 7)
FEDVR{Float64} basis with 9 elements on 0.0..2.0

julia> R̃ = R[:, 2:end-1]
FEDVR{Float64} basis with 9 elements on 0.0..2.0, restricted to elements 1:9, basis functions 2..54 ⊂ 1..55

julia> D = Derivative(axes(R̃, 1))
Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..2.0))

julia> R̃'*D*R̃
53×53 Array{Float64,2}:
   0.0       24.9049   -10.8404     6.92802   -5.42022    3.47715    0.0      …    0.0        0.0        0.0        0.0        0.0        0.0
 -24.9049     0.0       19.196     -9.59798    6.92802   -4.33262    0.0           0.0        0.0        0.0        0.0        0.0        0.0
  10.8404   -19.196      0.0       19.196    -10.8404     6.36396    0.0           0.0        0.0        0.0        0.0        0.0        0.0
  -6.92802    9.59798  -19.196      0.0       24.9049   -11.9814     0.0           0.0        0.0        0.0        0.0        0.0        0.0
   5.42022   -6.92802   10.8404   -24.9049     0.0       37.4844     0.0           0.0        0.0        0.0        0.0        0.0        0.0
  -3.47715    4.33262   -6.36396   11.9814   -37.4844     0.0       37.4844   …    0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0      -37.4844     0.0           0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0       11.9814   -24.9049        0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0       -6.36396   10.8404        0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        4.33262   -6.92802       0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0       -3.47715    5.42022  …    0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        2.25      -3.47715       0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0      …    0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0        0.0        0.0        0.0        0.0        0.0
   ⋮                                                      ⋮                   ⋱                                     ⋮
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0      …    0.0        0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0          -2.25       0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           3.47715    0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0          -4.33262    0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           6.36396    0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0      …  -11.9814     0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0          37.4844     0.0        0.0        0.0        0.0        0.0
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           0.0       37.4844   -11.9814     6.36396   -4.33262    3.47715
   0.0        0.0        0.0        0.0        0.0        0.0        0.0         -37.4844     0.0       24.9049   -10.8404     6.92802   -5.42022
   0.0        0.0        0.0        0.0        0.0        0.0        0.0          11.9814   -24.9049     0.0       19.196     -9.59798    6.92802
   0.0        0.0        0.0        0.0        0.0        0.0        0.0      …   -6.36396   10.8404   -19.196      0.0       19.196    -10.8404
   0.0        0.0        0.0        0.0        0.0        0.0        0.0           4.33262   -6.92802    9.59798  -19.196      0.0       24.9049
   0.0        0.0        0.0        0.0        0.0        0.0        0.0          -3.47715    5.42022   -6.92802   10.8404   -24.9049     0.0
dlfivefifty commented 4 years ago

I'll go through and try to fix some of this.

Note it's impossible to tell at compile time that the restriction is the same on the left and right...

jagot commented 4 years ago

Sure, but it does not need to be known at compile time IMHO, since you don't create the matrices over and over again in a computation; you might update the matrix, but you allocate it once at the beginning of the calculation. For this reason I think it is fine if similar is type-unstable, whereas copyto! is not.

What I want to be certain of is that the restriction "stays" with the basis object, and does not split into basis + restriction matrix.

jagot commented 4 years ago

If it is helpful to you, I could get rid of the @materialize macro, i.e. spell it out in terms of similar, copyto!, simplifiable, and _simplify.

dlfivefifty commented 4 years ago

But then, if it's not so important, we can just stick to a banded matrix and make sure a fast path is taken when that banded matrix is diagonal... this seems more robust
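A sketch of such a fast-path check (bandwidths is the BandedMatrices.jl query for the (sub, super) bandwidths; the dispatch target would be up to the caller):

using BandedMatrices, LinearAlgebra

# true when the banded matrix carries no off-diagonal bands
is_effectively_diagonal(A::BandedMatrix) = bandwidths(A) == (0, 0)

S = BandedMatrix(0 => [1.0, 2.0, 3.0])   # a diagonal matrix stored as banded
is_effectively_diagonal(S)               # true, so a diagonal fast path can be taken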

I can manage with @materialize

dlfivefifty commented 4 years ago

Your other broken example is not working for me:

julia> R̃'*D*R̃
QuasiArrays.ApplyQuasiArray{Float64,2,typeof(*),Tuple{Adjoint{Int64,BandedMatrices.BandedMatrix{Int64,FillArrays.Ones{Int64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},Base.OneTo{Int64}}},QuasiArrays.QuasiAdjoint{Float64,FEDVR{Float64,Float64,FillArrays.Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},FEDVR{Float64,Float64,FillArrays.Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},BandedMatrices.BandedMatrix{Int64,FillArrays.Ones{Int64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},Base.OneTo{Int64}}}}(*, ([0 1 … 0 0; 0 0 … 0 0; … ; 0 0 … 0 0; 0 0 … 1 0], QuasiArrays.QuasiAdjoint{Float64,FEDVR{Float64,Float64,FillArrays.Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}}(FEDVR{Float64} basis with 9 elements on 0.0..2.0), Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}(Inclusion(0.0..2.0)), FEDVR{Float64} basis with 9 elements on 0.0..2.0, [0 0 … 0 0; 1 0 … 0 0; … ; 0 0 … 0 1; 0 0 … 0 0]))

But I suspect we need to support Banded * BlockBandedMatrix

jagot commented 4 years ago

Sorry, had forgotten to push the last commit. Try again?

jagot commented 4 years ago

But I suspect we need to support Banded * BlockBandedMatrix

Indeed, but I wanted to avoid that multiplication altogether.

dlfivefifty commented 4 years ago

I'm sure we can hack around this for now for these bases, say via a 3-argument simplify that checks whether the restrictions on the left and right match

jagot commented 4 years ago

Fair enough, but I'd need up to 6-argument versions. I suppose the long-term plan would be to have some kind of trait that tells the simplification machinery not to expand SubQuasiArrays?

dlfivefifty commented 4 years ago

It only expands because the MemoryLayout tells it to behave like a ApplyLayout{typeof(*)} in a simplify: https://github.com/JuliaApproximation/ContinuumArrays.jl/blob/e2895539eb046c82e55f19feec43bb18726ae047/src/bases/bases.jl#L273

If you give it a different layout it can have a different behaviour, but then you lose something in the process...so will have to think about how to structure this.

dlfivefifty commented 4 years ago

I went through and did most of the FD tests. Can you check that you are happy with where it's going? I didn't see the point of the lazy inner products: there is now a LazyArrays.Dot, so maybe you can use that? Or maybe other fixes I've done have made these unnecessary?

I probably deleted some tests and types you may want which we can restore.

jagot commented 4 years ago

I think it's going in the right direction; in fact, I never actually used the lazy inner products, and mostly I just precompute the mass matrix S and compute inner products as dot(a, S, b).

dlfivefifty commented 4 years ago

For blocked arrays we need to rethink how we drop the first/last entry: the convention is that if A is an (abstract) matrix then axes(A[kr,jr]) == (axes(kr,1),axes(jr,1)), that is, we expect indexing to inherit the block structure of the indices. That is, A[2:end-1,2:end-1] will lose all the block structure, as axes(2:end-1) == (Base.OneTo(n),). This leads to the following behaviour:

julia>             ∇ = first_derivative(R, D)
7×7-blocked 13×13 BlockSkylineMatrix{Float64,Array{Float64,1},BlockBandedMatrices.BlockSkylineSizes{Tuple{BlockArrays.BlockedUnitRange{Array{Int64,1}},BlockArrays.BlockedUnitRange{Array{Int64,1}}},Array{Int64,1},Array{Int64,1},BandedMatrix{Int64,Array{Int64,2},Base.OneTo{Int64}},Array{Int64,1}}}:
 -1.2        0.723607  -0.276393  │   0.141421  │    ⋅          ⋅        │    ⋅        │    ⋅          ⋅        │    ⋅        │    ⋅          ⋅          ⋅      
 -0.723607   0.0        0.447214  │  -0.19544   │    ⋅          ⋅        │    ⋅        │    ⋅          ⋅        │    ⋅        │    ⋅          ⋅          ⋅      
  0.276393  -0.447214   0.0       │   0.511667  │    ⋅          ⋅        │    ⋅        │    ⋅          ⋅        │    ⋅        │    ⋅          ⋅          ⋅      
 ─────────────────────────────────┼─────────────┼────────────────────────┼─────────────┼────────────────────────┼─────────────┼─────────────────────────────────
 -0.141421   0.19544   -0.511667  │   0.0       │   0.511667  -0.19544   │   0.1       │    ⋅          ⋅        │    ⋅        │    ⋅          ⋅          ⋅      
 ─────────────────────────────────┼─────────────┼────────────────────────┼─────────────┼────────────────────────┼─────────────┼─────────────────────────────────
   ⋅          ⋅          ⋅        │  -0.511667  │   0.0        0.447214  │  -0.19544   │    ⋅          ⋅        │    ⋅        │    ⋅          ⋅          ⋅      
   ⋅          ⋅          ⋅        │   0.19544   │  -0.447214   0.0       │   0.511667  │    ⋅          ⋅        │    ⋅        │    ⋅          ⋅          ⋅      
 ─────────────────────────────────┼─────────────┼────────────────────────┼─────────────┼────────────────────────┼─────────────┼─────────────────────────────────
   ⋅          ⋅          ⋅        │  -0.1       │   0.19544   -0.511667  │   0.0       │   0.511667  -0.19544   │   0.1       │    ⋅          ⋅          ⋅      
 ─────────────────────────────────┼─────────────┼────────────────────────┼─────────────┼────────────────────────┼─────────────┼─────────────────────────────────
   ⋅          ⋅          ⋅        │    ⋅        │    ⋅          ⋅        │  -0.511667  │   0.0        0.447214  │  -0.19544   │    ⋅          ⋅          ⋅      
   ⋅          ⋅          ⋅        │    ⋅        │    ⋅          ⋅        │   0.19544   │  -0.447214   0.0       │   0.511667  │    ⋅          ⋅          ⋅      
 ─────────────────────────────────┼─────────────┼────────────────────────┼─────────────┼────────────────────────┼─────────────┼─────────────────────────────────
   ⋅          ⋅          ⋅        │    ⋅        │    ⋅          ⋅        │  -0.1       │   0.19544   -0.511667  │   0.0       │   0.511667  -0.19544    0.141421
 ─────────────────────────────────┼─────────────┼────────────────────────┼─────────────┼────────────────────────┼─────────────┼─────────────────────────────────
   ⋅          ⋅          ⋅        │    ⋅        │    ⋅          ⋅        │    ⋅        │    ⋅          ⋅        │  -0.511667  │   0.0        0.447214  -0.276393
   ⋅          ⋅          ⋅        │    ⋅        │    ⋅          ⋅        │    ⋅        │    ⋅          ⋅        │   0.19544   │  -0.447214   0.0        0.723607
   ⋅          ⋅          ⋅        │    ⋅        │    ⋅          ⋅        │    ⋅        │    ⋅          ⋅        │  -0.141421  │   0.276393  -0.723607   1.2     

julia> sel
2:12

julia> ∇[sel,sel]
11×11 Array{Float64,2}:
  0.0        0.447214  -0.19544    0.0        0.0        0.0        0.0        0.0        0.0        0.0        0.0
 -0.447214   0.0        0.511667   0.0        0.0        0.0        0.0        0.0        0.0        0.0        0.0
  0.19544   -0.511667   0.0        0.511667  -0.19544    0.1        0.0        0.0        0.0        0.0        0.0
  0.0        0.0       -0.511667   0.0        0.447214  -0.19544    0.0        0.0        0.0        0.0        0.0
  0.0        0.0        0.19544   -0.447214   0.0        0.511667   0.0        0.0        0.0        0.0        0.0
  0.0        0.0       -0.1        0.19544   -0.511667   0.0        0.511667  -0.19544    0.1        0.0        0.0
  0.0        0.0        0.0        0.0        0.0       -0.511667   0.0        0.447214  -0.19544    0.0        0.0
  0.0        0.0        0.0        0.0        0.0        0.19544   -0.447214   0.0        0.511667   0.0        0.0
  0.0        0.0        0.0        0.0        0.0       -0.1        0.19544   -0.511667   0.0        0.511667  -0.19544
  0.0        0.0        0.0        0.0        0.0        0.0        0.0        0.0       -0.511667   0.0        0.447214
  0.0        0.0        0.0        0.0        0.0        0.0        0.0        0.0        0.19544   -0.447214   0.0

which the simplify machinery successfully replicates.

No simple solution sticks out to fix this. Here is a quick hack that works:

julia> function dirichlet(kr)
           ls = copy(kr.lasts)
           ls[end] -= 1
           _BlockedUnitRange(2,ls)
       end
dirichlet (generic function with 1 method)

julia> dirichlet(axes(∇,1))
7-blocked 11-element BlockedUnitRange{Array{Int64,1}}:
  2
  3
 ──
  4
 ──
  5
  6
 ──
  7
 ──
  8
  9
 ──
 10
 ──
 11
 12
jagot commented 4 years ago

Right, but I don't think fixing the indexing of BlockSkylineMatrices is the way to solve the problem (even though it of course should be fixed too); I go to rather great lengths to make sure I can materialize an arbitrary subset of the full operator, for maximum efficiency. This is related to what I discuss here https://github.com/JuliaApproximation/ContinuumArrays.jl/issues/8 and here https://github.com/JuliaApproximation/ContinuumArrays.jl/issues/36. At the extreme end, I envision splitting the operator into chunks across different nodes of a cluster (only communicating bridging elements), and then you don't want each node to compute the full operator and truncate it afterwards.

       function dirichlet(kr)
           ls = copy(kr.lasts)
           ls[end] -= 1
           _BlockedUnitRange(2,ls)
       end

This will only work if you drop exactly one basis function on each side, and the first finite element has polynomial order > 2 (otherwise the first block is 1x1).
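For illustration, a hedged sketch of a more general contiguous restriction of a blocked axis, dropping a functions on the left and b on the right (a hypothetical helper built on the same non-exported constructor; blocks that become empty are dropped rather than kept as zero-length blocks):

using BlockArrays

function restrict_blocked(kr, a, b)
    lo, hi = first(kr) + a, last(kr) - b
    ls = [l for l in kr.lasts if l >= lo]   # drop blocks that end before the new start
    ls[ls .> hi] .= hi                      # clip blocks that extend past the new end
    BlockArrays._BlockedUnitRange(lo, unique(ls))
end

# restrict_blocked(axes(∇, 1), 1, 1) reproduces dirichlet(axes(∇, 1)) above, but also
# handles wider restrictions and a first block of size 1.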

Incidentally, is there a short description of MemoryLayout et al., and what we lose if we use something else to keep SubQuasiArrays unexpanded?

dlfivefifty commented 4 years ago

I think dirichlet should actually be implemented as a type. And then you can avoid auto-expand easily. (It's also possible to turn it off just for FEDVR but it still doesn't solve the core problem of inconsistent axes)

jagot commented 4 years ago

I think instead of a single dirichlet type, we should think about the interface for arbitrary boundary conditions: https://github.com/JuliaApproximation/ContinuumArrays.jl/issues/20

But it is not only about boundary conditions, I need to work with arbitrary (albeit contiguous) restrictions.

jagot commented 3 years ago

I've been holding off doing anything about this for some time, since I've needed to work on my time propagator. Have you reached any conclusion about what to do with the memory layouts?

dlfivefifty commented 3 years ago

I think making a Dirichlet type in this package would be the best solution for now.

jagot commented 3 years ago

That is not enough for me, I'm afraid. I don't only need to be able to drop the first/last elements, but an arbitrary (but contiguous) amount.

dlfivefifty commented 3 years ago

OK call it SubBlockedUnitRange

dlfivefifty commented 3 years ago

OK there's a big flaw in the design: the block structure of the bases should match that of the matrices. So this is wrong:

julia> order,sel,N = 4,2:12,5;

julia> t = range(0,stop=20,length=N);

julia> R = FEDVR(t, order);

julia> D = Derivative(axes(R,1));

julia> axes(R'*D*R,2)
7-blocked 13-element BlockArrays.BlockedUnitRange{Vector{Int64}}:
  1
  2
  3
 ──
  4
 ──
  5
  6
 ──
  7
 ──
  8
  9
 ──
 10
 ──
 11
 12
 13

julia> axes(R,2)
Base.OneTo(13)

I think a first step is to fix this. In particular, we want R'R to give a Diagonal with blocked axes:

julia> Diagonal(Ones((axes(∇,2),)))
13×13 Diagonal{Float64, Ones{Float64, 1, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}}}} with indices 1:1:13×1:1:13
jagot commented 3 years ago

I think a first step is to fix this. In particular, we want R'R to give a block Diagonal

Hmm, but the metric in FEDVR is the identity matrix (by construction), so ideally I would want dot(u, R'R, v) to fall through to dot(u, v) (the same way we implemented a special case for BandedMatrix{<:FillArray}, if you remember). Will this be possible with a vector over a BlockedUnitRange as the underlying storage for Diagonal?

dlfivefifty commented 3 years ago

That's easy:

julia> Diagonal(Ones((blockedrange(1:5),))) isa Eye
true

julia> LinearAlgebra.dot(::AbstractVector, ::Eye, ::AbstractVector) = "call dot(A,B) here"

julia> dot(randn(5), Diagonal(Ones((blockedrange(1:5),))) , randn(5))
"call dot(A,B) here"

This fallback should be in FillArrays.jl.
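For reference, the non-demo version of that fallback might look like this (a sketch; dimension checks and the method's eventual home are left to FillArrays.jl):

using LinearAlgebra, FillArrays

# identity metric: dot(u, I, v) reduces to the plain inner product
LinearAlgebra.dot(u::AbstractVector, ::Eye, v::AbstractVector) = dot(u, v)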

jagot commented 3 years ago

I figured I could use block_structure to compute axes(R, 2), if I remove the call to indices in block_orders:

https://github.com/JuliaApproximation/CompactBases.jl/blob/4d58f0c39cd9065b9553c187b9aae1571b4dfa70/src/fedvr.jl#L120-L156

axes(B::FEDVR) = (Inclusion(first(B.t)..last(B.t)), blockedrange(block_structure(B)))

Then we get something like this:

julia> R = FEDVR(range(0.0, stop=1.0, length=5), 4)
FEDVR{Float64} basis with 4 elements on 0.0..1.0

julia> axes(R,2)
7-blocked 13-element BlockedUnitRange{Vector{Int64}}:
  1
  2
  3
 ──
  4
 ──
  5
  6
 ──
  7
 ──
  8
  9
 ──
 10
 ──
 11
 12
 13

julia> D = Derivative(axes(R, 1))
Derivative{Float64, IntervalSets.ClosedInterval{Float64}}

julia> R'D*R
7×7-blocked 13×13 BlockSkylineMatrix{Float64, Vector{Float64}, BlockBandedMatrices.BlockSkylineSizes{Tuple{BlockedUnitRange{Vector{Int64}}, BlockedUnitRange{Vector{Int64}}}, Vector{Int64}, Vector{Int64}, BandedMatrix{Int64, Matrix{Int64}, Base.OneTo{Int64}}, Vector{Int64}}}:
 -24.0      14.4721    -5.52786  │    2.82843  │    ⋅          ⋅       │     ⋅       │    ⋅          ⋅       │     ⋅       │    ⋅          ⋅         ⋅
 -14.4721    0.0        8.94427  │   -3.90879  │    ⋅          ⋅       │     ⋅       │    ⋅          ⋅       │     ⋅       │    ⋅          ⋅         ⋅
   5.52786  -8.94427    0.0      │   10.2333   │    ⋅          ⋅       │     ⋅       │    ⋅          ⋅       │     ⋅       │    ⋅          ⋅         ⋅
 ────────────────────────────────┼─────────────┼───────────────────────┼─────────────┼───────────────────────┼─────────────┼───────────────────────────────
  -2.82843   3.90879  -10.2333   │    0.0      │  10.2333    -3.90879  │    2.0      │    ⋅          ⋅       │     ⋅       │    ⋅          ⋅         ⋅
 ────────────────────────────────┼─────────────┼───────────────────────┼─────────────┼───────────────────────┼─────────────┼───────────────────────────────
    ⋅         ⋅          ⋅       │  -10.2333   │   0.0        8.94427  │   -3.90879  │    ⋅          ⋅       │     ⋅       │    ⋅          ⋅         ⋅
    ⋅         ⋅          ⋅       │    3.90879  │  -8.94427    0.0      │   10.2333   │    ⋅          ⋅       │     ⋅       │    ⋅          ⋅         ⋅
 ────────────────────────────────┼─────────────┼───────────────────────┼─────────────┼───────────────────────┼─────────────┼───────────────────────────────
    ⋅         ⋅          ⋅       │   -2.0      │   3.90879  -10.2333   │    0.0      │  10.2333    -3.90879  │    2.0      │    ⋅          ⋅         ⋅
 ────────────────────────────────┼─────────────┼───────────────────────┼─────────────┼───────────────────────┼─────────────┼───────────────────────────────
    ⋅         ⋅          ⋅       │     ⋅       │    ⋅          ⋅       │  -10.2333   │   0.0        8.94427  │   -3.90879  │    ⋅          ⋅         ⋅
    ⋅         ⋅          ⋅       │     ⋅       │    ⋅          ⋅       │    3.90879  │  -8.94427    0.0      │   10.2333   │    ⋅          ⋅         ⋅
 ────────────────────────────────┼─────────────┼───────────────────────┼─────────────┼───────────────────────┼─────────────┼───────────────────────────────
    ⋅         ⋅          ⋅       │     ⋅       │    ⋅          ⋅       │   -2.0      │   3.90879  -10.2333   │    0.0      │  10.2333    -3.90879   2.82843
 ────────────────────────────────┼─────────────┼───────────────────────┼─────────────┼───────────────────────┼─────────────┼───────────────────────────────
    ⋅         ⋅          ⋅       │     ⋅       │    ⋅          ⋅       │     ⋅       │    ⋅          ⋅       │  -10.2333   │   0.0        8.94427  -5.52786
    ⋅         ⋅          ⋅       │     ⋅       │    ⋅          ⋅       │     ⋅       │    ⋅          ⋅       │    3.90879  │  -8.94427    0.0      14.4721
    ⋅         ⋅          ⋅       │     ⋅       │    ⋅          ⋅       │     ⋅       │    ⋅          ⋅       │   -2.82843  │   5.52786  -14.4721   24.0

There are some issues with this, however:

dlfivefifty commented 3 years ago

Right, this is the issue: we always want the block structure of view(A, kr, jr) to match that of kr and jr. So since 2:n-1 has only one block (this is the case for non-blocked ranges), the SubArray also has only one block.

There is a way around this but we need to decide on the syntax. Perhaps

view(A, Dirichlet())

or

view(A, :, InheritBlocks(2:end-1))
jagot commented 3 years ago

Ok, so how about something like this:

What do you think about this? Should this be implemented in CompactBases.jl or ContinuumArrays.jl?

dlfivefifty commented 3 years ago

Yes, a macro makes sense; maybe @inheritblocks view(A,2:end-1) should work. This would go in BlockArrays.jl.

jagot commented 3 years ago

@mortenpi looked a bit at this PR, and found your commit afee6cbf80320d2107d0d7353b8f1f385cd07b63, which set the MemoryLayout for restricted FEDVR. With that, he could keep B in B'D'D*B from expanding to A*BandedMatrix on the right, but not on the left, since somehow it tried to materialize A'D'D*B into a matrix of the wrong size. Is there a difference between how MemoryLayout is working on QuasiArrays and their adjoints?

FWIW, he tried to add

ContinuumArrays.MemoryLayout(::Type{<:AdjointBasisOrRestricted{<:FEDVR}}) = ContinuumArrays.BasisLayout()

but that did not change things.

I feel that the issue of the blocked axes is slightly decoupled from how the memory layout is applied, i.e. whether or not a SubQuasiArray is expanded into its parent and a BandedMatrix; isn't that so?

dlfivefifty commented 3 years ago

It should be AdjointBasisLayout I believe

I'd have to see the error to say anything more

mortenpi commented 3 years ago

The error is still there even with AdjointBasisLayout. Just to be explicit, I added the following methods:

ContinuumArrays.MemoryLayout(::Type{<:BasisOrRestricted{<:FEDVR}}) = ContinuumArrays.BasisLayout()
ContinuumArrays.MemoryLayout(::Type{<:AdjointBasisOrRestricted{<:FEDVR}}) = ContinuumArrays.AdjointBasisLayout()

and it throws

ERROR: DimensionMismatch("axes must be same")

which gets thrown here: https://github.com/JuliaApproximation/CompactBases.jl/blob/dl/newquasiarrays/src/materialize_dsl.jl#L22

I definitely see the following _simplify called in the stacktrace (frame [5] in the full stacktrace below):

_simplify(
  ::typeof(*),
  ::Adjoint{Int64,BandedMatrix{Int64,Ones{Int64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},Base.OneTo{Int64}}},
  ::QuasiAdjoint{Float64,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}},
  ::QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}},
  ::Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},
  ::SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},UnitRange{Int64}},false}
) at /home/mortenpi/Projects/continuumarrays/dev/LazyArrays/src/linalg/mul.jl:294

which means that the BandedMatrix "simplification" is still applied on the left at some stage. Basically, it looks like it keeps the right side a SubQuasiArray, but transforms the left side. It then tries to dispatch on the 4-argument * defined in CompactBases without the BandedMatrix... and I guess it would then later multiply on the left with the BandedMatrix to re-apply the restriction.

In this case, though, one mistake is arguably in the signatures of the 3- and 4-argument @simplify methods for *: they are allowed to evaluate not only the cases where the left/right factors are Basis/Basis or SubQuasiArray{Basis}/SubQuasiArray{Basis}, but also Basis/SubQuasiArray{Basis} and SubQuasiArray{Basis}/Basis, even though the implementation can't actually handle the latter two cases.

Full stacktrace:

ERROR: DimensionMismatch("axes must be same")
Stacktrace:
 [1] copyto!(::BlockSkylineMatrix{Float64,Array{Float64,1},BlockBandedMatrices.BlockSkylineSizes{Tuple{BlockArrays.BlockedUnitRange{Array{Int64,1}},BlockArrays.BlockedUnitRange{Array{Int64,1}}},Array{Int64,1},Array{Int64,1},BandedMatrix{Int64,Array{Int64,2},Base.OneTo{Int64}},Array{Int64,1}}}, ::ApplyQuasiArray{Float64,2,typeof(*),Tuple{QuasiAdjoint{Float64,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}},QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},UnitRange{Int64}},false}}}) at /home/mortenpi/Projects/continuumarrays/CompactBases/src/materialize_dsl.jl:23
 [2] _simplify(::typeof(*), ::QuasiAdjoint{Float64,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}}, ::QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}, ::Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}, ::SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},UnitRange{Int64}},false}) at /home/mortenpi/Projects/continuumarrays/CompactBases/src/materialize_dsl.jl:120
 [3] _tail_simplify at /home/mortenpi/Projects/continuumarrays/dev/LazyArrays/src/linalg/mul.jl:297 [inlined]
 [4] _most_simplify at /home/mortenpi/Projects/continuumarrays/dev/LazyArrays/src/linalg/mul.jl:296 [inlined]
 [5] _simplify(::typeof(*), ::Adjoint{Int64,BandedMatrix{Int64,Ones{Int64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},Base.OneTo{Int64}}}, ::QuasiAdjoint{Float64,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}}, ::QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}, ::Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}, ::SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},UnitRange{Int64}},false}) at /home/mortenpi/Projects/continuumarrays/dev/LazyArrays/src/linalg/mul.jl:294
 [6] simplify at /home/mortenpi/Projects/continuumarrays/dev/LazyArrays/src/linalg/mul.jl:290 [inlined]
 [7] simplify at /home/mortenpi/Projects/continuumarrays/dev/LazyArrays/src/linalg/mul.jl:300 [inlined]
 [8] copy(::Mul{LazyArrays.ApplyLayout{typeof(*)},ContinuumArrays.BasisLayout,ApplyQuasiArray{Float64,2,typeof(*),Tuple{Adjoint{Int64,BandedMatrix{Int64,Ones{Int64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},Base.OneTo{Int64}}},QuasiAdjoint{Float64,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}},QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}},SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},UnitRange{Int64}},false}}) at /home/mortenpi/Projects/continuumarrays/dev/LazyArrays/src/linalg/mul.jl:304
 [9] materialize(::Mul{LazyArrays.ApplyLayout{typeof(*)},ContinuumArrays.BasisLayout,ApplyQuasiArray{Float64,2,typeof(*),Tuple{Adjoint{Int64,BandedMatrix{Int64,Ones{Int64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},Base.OneTo{Int64}}},QuasiAdjoint{Float64,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}},QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}},SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},UnitRange{Int64}},false}}) at /home/mortenpi/.julia/packages/ArrayLayouts/7LZ91/src/mul.jl:105
 [10] mul at /home/mortenpi/.julia/packages/ArrayLayouts/7LZ91/src/mul.jl:106 [inlined]
 [11] * at /home/mortenpi/Projects/continuumarrays/dev/QuasiArrays/src/matmul.jl:23 [inlined]
 [12] afoldl(::typeof(*), ::ApplyQuasiArray{Float64,2,typeof(*),Tuple{Adjoint{Int64,BandedMatrix{Int64,Ones{Int64,2,Tuple{Base.OneTo{Int64},Base.OneTo{Int64}}},Base.OneTo{Int64}}},QuasiAdjoint{Float64,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}},QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}},Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}, ::SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},UnitRange{Int64}},false}) at ./operators.jl:525
 [13] * at ./operators.jl:538 [inlined]
 [14] call_for_debugging(::SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},UnitRange{Int64}},false}) at /home/mortenpi/Projects/continuumarrays/compactbases.jl:25
 [15] top-level scope at REPL[1]:1

And, for completeness, what I call is

R = FEDVR(range(0,stop=20,length=5), 4)
R̃ = R[:,2:12]
function call_for_debugging(r)
    D = Derivative(axes(r,1))
    Dc = D'
    rc = r'
    *(rc, Dc, D, r)
end
call_for_debugging(R̃)

Disclaimer: I am not at all qualified to hack on this code, so apologies for any silliness on my part. I figured I'd try to hack on this PR a bit to get at least some sense of how ContinuumArrays and the underlying packages work.

My longer-term aim is to provide a proper ContinuumArrays-compatible interface to a 1D FEM package that I work with. Stefanos helped me get started with that, but it's currently stuck in a bit of version hell and would benefit from a new CompactBases release.

mortenpi commented 3 years ago

I think it's probably unrelated to this PR, but another thing I noticed is that trying to restrict a basis by indexing into it with a Vector of indices throws an error. And it looks like it comes from some underlying packages, rather than being caused by CompactBases per se.

I.e. calling this (on top of this PR without any custom modifications):

R = FEDVR(range(0,stop=20,length=5), 4)
function call_for_debugging(r)
    D = Derivative(axes(r,1))
    Dc = D'
    rc = r'
    *(rc, Dc, D, r)
end
call_for_debugging(R[:,[1,2,3,4,5]])

throws

ERROR: ArgumentError: new length must be ≥ 0
Stacktrace:
 [1] resize! at ./array.jl:1088 [inlined]
 [2] rehash!(::Dict{Float64,Nothing}, ::Int64) at ./dict.jl:183
 [3] sizehint! at ./dict.jl:242 [inlined]
 [4] sizehint! at ./set.jl:74 [inlined]
 [5] union!(::Set{Float64}, ::Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}}) at ./abstractset.jl:89
 [6] Set{Float64}(::Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}}) at ./set.jl:10
 [7] _Set(::Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}}, ::Base.HasEltype) at ./set.jl:23
 [8] Set(::Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}}) at ./set.jl:21
 [9] _shrink(::Function, ::Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}}, ::Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}) at ./array.jl:2562
 [10] intersect(::Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}}, ::Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}}) at ./array.jl:2566
 [11] _broadcast_getindex_evalf at ./broadcast.jl:648 [inlined]
 [12] _broadcast_getindex at ./broadcast.jl:621 [inlined]
 [13] (::Base.Broadcast.var"#19#20"{Base.Broadcast.Broadcasted{Base.Broadcast.Style{Tuple},Nothing,typeof(intersect),Tuple{Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}}}})(::Int64) at ./broadcast.jl:1046
 [14] ntuple at ./ntuple.jl:41 [inlined]
 [15] copy at ./broadcast.jl:1046 [inlined]
 [16] materialize at ./broadcast.jl:837 [inlined]
 [17] _mat_mul_arguments(::Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}}, ::Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Array{Int64,1}}) at /home/mortenpi/Projects/continuumarrays/dev/LazyArrays/src/linalg/mul.jl:177
 [18] _mat_mul_arguments(::SubQuasiArray{Float64,2,ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Array{Int64,1}},false}) at /home/mortenpi/Projects/continuumarrays/dev/LazyArrays/src/linalg/mul.jl:198
 [19] arguments(::LazyArrays.ApplyLayout{typeof(*)}, ::SubQuasiArray{Float64,2,ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Array{Int64,1}},false}) at /home/mortenpi/Projects/continuumarrays/dev/QuasiArrays/src/lazyquasiarrays.jl:209
 [20] arguments(::SubQuasiArray{Float64,2,ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Array{Int64,1}},false}) at /home/mortenpi/Projects/continuumarrays/dev/QuasiArrays/src/lazyquasiarrays.jl:208
 [21] sub_materialize at /home/mortenpi/Projects/continuumarrays/dev/LazyArrays/src/linalg/mul.jl:204 [inlined]
 [22] sub_materialize at /home/mortenpi/Projects/continuumarrays/dev/QuasiArrays/src/subquasiarray.jl:328 [inlined]
 [23] layout_getindex at /home/mortenpi/.julia/packages/ArrayLayouts/7LZ91/src/ArrayLayouts.jl:108 [inlined]
 [24] _getindex(::Type{T} where T, ::IndexCartesian, ::ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}}}, ::Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Array{Int64,1}}) at /home/mortenpi/Projects/continuumarrays/dev/QuasiArrays/src/abstractquasiarray.jl:386
 [25] _getindex at /home/mortenpi/Projects/continuumarrays/dev/QuasiArrays/src/abstractquasiarray.jl:377 [inlined]
 [26] getindex(::ApplyQuasiArray{Float64,2,typeof(*),Tuple{Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}},FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}}}}, ::Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}}, ::Array{Int64,1}) at /home/mortenpi/Projects/continuumarrays/dev/QuasiArrays/src/abstractquasiarray.jl:372
 [27] macro expansion at /home/mortenpi/Projects/continuumarrays/dev/ContinuumArrays/src/bases/bases.jl:295 [inlined]
 [28] mul(::Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}, ::SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Array{Int64,1}},false}) at /home/mortenpi/Projects/continuumarrays/dev/ContinuumArrays/src/operators.jl:33
 [29] mul(::QuasiAdjoint{Float64,SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Array{Int64,1}},false}}, ::QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}) at /home/mortenpi/Projects/continuumarrays/dev/ContinuumArrays/src/operators.jl:41
 [30] *(::QuasiAdjoint{Float64,SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Array{Int64,1}},false}}, ::QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}) at /home/mortenpi/Projects/continuumarrays/dev/QuasiArrays/src/matmul.jl:23
 [31] *(::QuasiAdjoint{Float64,SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Array{Int64,1}},false}}, ::QuasiAdjoint{Float64,Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}}, ::Derivative{Float64,IntervalSets.Interval{:closed,:closed,Float64}}, ::SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Array{Int64,1}},false}) at ./operators.jl:538
 [32] call_for_debugging(::SubQuasiArray{Float64,2,FEDVR{Float64,Float64,Fill{Int64,1,Tuple{Base.OneTo{Int64}}}},Tuple{Inclusion{Float64,IntervalSets.Interval{:closed,:closed,Float64}},Array{Int64,1}},false}) at /home/mortenpi/Projects/continuumarrays/compactbases.jl:49
 [33] top-level scope at REPL[2]:1
dlfivefifty commented 3 years ago

I’ll look into this today

dlfivefifty commented 3 years ago

I'm looking at this right now. First, I think it's a mistake to leave D * R unmaterialized as we then cannot, for example, evaluate the derivative:

julia> (D * R)[0.1,1]
ERROR: MethodError: no method matching iterate(::IntervalSets.ClosedInterval{Float64})

I therefore think it would be better to have this return a type DerivativeFEDVR or whatever, which will simplify matters.
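For illustration, a minimal sketch of what such a wrapper could look like (the name DerivativeFEDVR is from the comment above; the supertype, fields and methods here are assumptions, not the package's design):

struct DerivativeFEDVR{T,B<:FEDVR} <: QuasiArrays.AbstractQuasiMatrix{T}
    R::B   # the underlying FEDVR basis
end
DerivativeFEDVR(R::FEDVR) = DerivativeFEDVR{eltype(R),typeof(R)}(R)

Base.axes(D::DerivativeFEDVR) = axes(D.R)
# getindex(D, x, j) would evaluate the derivative of the j-th basis function at x,
# which is exactly what (D*R)[0.1, 1] above cannot do while the product is left lazy.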

Though I'll see if I can get it working as-is