Closed — dlfivefifty closed this 3 years ago
@jagot
Merging #35 (58b40da) into master (c009341) will increase coverage by 2.07%. The diff coverage is 92.98%.
@@            Coverage Diff             @@
##           master      #35      +/-   ##
==========================================
+ Coverage   42.54%   44.62%   +2.07%
==========================================
  Files          17       19       +2
  Lines        1241     1293      +52
==========================================
+ Hits          528      577      +49
- Misses        713      716       +3
Impacted Files | Coverage Δ |
---|---|
src/QuasiArrays.jl | 70.00% <ø> (ø) |
src/quasireshapedarray.jl | 50.00% <ø> (ø) |
src/indices.jl | 44.27% <80.00%> (+0.42%) ⬆️ |
src/quasipermutedims.jl | 90.00% <90.00%> (ø) |
src/quasikron.jl | 91.66% <91.66%> (ø) |
src/abstractquasiarray.jl | 32.48% <100.00%> (ø) |
src/quasiconcat.jl | 100.00% <100.00%> (ø) |
Continue to review full report at Codecov.
Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data
Powered by Codecov. Last update c009341...58b40da.
I also added `UnionVcat`, which will be used as an analogue of `Vcat`, but where the axis is the union of the two inputs. This is useful for combining two bases defined on different domains.
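To illustrate the idea of an axis that is the union of its pieces' axes, here is a minimal plain-Julia sketch (the `UnionPiece` type and its fields are hypothetical, not the package's `UnionVcat` implementation): evaluation dispatches on whichever domain contains the point.

```julia
# Hypothetical sketch of "axis = union of the two inputs' axes":
# a function-like object made of two pieces on disjoint domains.
struct UnionPiece{F,G}
    f::F; fdom::Tuple{Float64,Float64}   # first piece and its domain
    g::G; gdom::Tuple{Float64,Float64}   # second piece and its domain
end

function (u::UnionPiece)(x)
    if u.fdom[1] ≤ x ≤ u.fdom[2]
        u.f(x)                            # x lies in the first domain
    elseif u.gdom[1] ≤ x ≤ u.gdom[2]
        u.g(x)                            # x lies in the second domain
    else
        throw(BoundsError(u, x))          # outside the union axis
    end
end

u = UnionPiece(sin, (0.0, 1.0), cos, (2.0, 3.0))
u(0.5) == sin(0.5)
u(2.5) == cos(2.5)
```

The real `UnionVcat` would of course carry proper quasi-axes rather than tuples of endpoints; this only sketches the dispatch-on-domain semantics.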
> This adds support for Kronecker products.
I am a little unsure what this means precisely; is Kronecker (here) the same as a direct-product basis? E.g. two possible parameterizations of a 2d grid could be `X⊗Y`, where `X` and `Y` are 1d Cartesian grids, or `R⊗Φ`, where `R` could be e.g. Bessel functions and `Φ[:,j] = exp(im*j*φ)`.
> I also added `UnionVcat`, which will be used as an analogue of `Vcat`, but where the axis is the union of the two inputs. This is useful for combining two bases defined on different domains.
This is also nice; I assume that the two bases need to be disjoint and adjacent? I also have the case where the axes overlap each other slightly, i.e. you solve the problem twice and "exchange" information between the domains each time step.
> I am a little unsure what this means precisely
This is the proposed behaviour we would want:
T = ChebyshevT()
U = ChebyshevU()
K = QuasiKron(T,U)
K[(0.1,0.2),(3,4)] == T[0.1,3] * U[0.2,4]
But it also does Kronecker products of vectors:
u = T * [randn(10); zeros(∞)]
v = U * [randn(10); zeros(∞)]
k = QuasiKron(u,v)
k[(0.1,0.2)] == u[0.1] * v[0.2]
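This mirrors the classical Kronecker product exactly, except that a plain `kron` linearises the pair of indices into one, whereas `QuasiKron` keeps them as a tuple of quasi-indices. A runnable plain-array check of the same identity:

```julia
using LinearAlgebra

# For ordinary vectors, kron(u, v) stores u[i]*v[j] at the
# linearised position (i-1)*length(v) + j.
u, v = randn(3), randn(4)
k = kron(u, v)
i, j = 2, 3
k[(i-1)*length(v) + j] == u[i] * v[j]   # true
```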
There's an argument that `QuasiCartesianIndex` would play the role of tuple. We could even have it auto-tensor, so that above `k[0.1,0.2] == k[QuasiCartesianIndex(0.1,0.2)] == u[0.1]*v[0.2]` and `K[QuasiCartesianIndex(0.1,0.2),CartesianIndex(3,4)] == K[0.1,0.2,3,4] == T[0.1,3]*U[0.2,4]`. There is a precedent from arrays:
julia> A[CartesianIndex(1,2),CartesianIndex(2)]
-0.8574531513858503
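A self-contained version of that precedent (with a concrete `A`, since the snippet above does not define one): `CartesianIndex`es splat into the surrounding indexing expression, so mixing them with different lengths is already allowed for ordinary arrays.

```julia
# Mixed CartesianIndex indexing on a plain 3-dimensional array:
# the two indices together supply all three dimensions.
A = reshape(collect(1:8), 2, 2, 2)
A[CartesianIndex(1, 2), CartesianIndex(2)] == A[1, 2, 2]   # true
```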
The challenge though would be that in that case `K isa AbstractQuasiArray{T,4}`, which raises many questions on how to make `*` well-defined.
Something else we need to consider is the array-valued case. For quasi vectors I think there's nothing to be done: we can just use `BroadcastQuasiArray`:
julia> a = QuasiVector(randn(5), 0:0.5:2);
julia> BroadcastQuasiArray(vcat, a, a)[0.5]
2-element Array{Float64,1}:
1.622839557085236
1.622839557085236
julia> BroadcastQuasiArray(hvcat, Ref((2,2)), a, a, a, a)[0.5]
2×2 Array{Float64,2}:
1.62284 1.62284
1.62284 1.62284
But what about bases? That is, we want to do something like
L = LinearSpline(0:0.5:2)
A = ArrayBasis(L, 2)
@test eltype(A) == Vector{Float64}
axes(A) == (axes(L,1), Base.OneTo(2size(L,2))) # Probably we actually want to use blocked range for axes(A,2)
@test A[0.5,3] == [L[0.5,3],0] # probably want A[0.5,Block(1)[3]] == [L[0.5,3],0]
@test A[0.5,7] == [0,L[0.5,2]] # probably want A[0.5,Block(2)[2]] == [0,L[0.5,2]]
x = randn(5); y = randn(5);
@test (A * [x; y])[0.5] == A[0.5,:] * [x; y] == [L[0.5,:]*x; L[0.5,:]*y]
I'm trying to think whether this version of concatenation has a simple expression for classical arrays... probably not unfortunately.
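For what it's worth, at any *fixed* point the last identity above is a familiar block-diagonal (Kronecker-with-identity) action on the stacked coefficients, even if the full continuous concatenation has no simple classical analogue. A plain-array sketch, where `row` is a stand-in for `L[0.5,:]`:

```julia
using LinearAlgebra

# Pointwise, (A * [x; y])[t] == [L[t,:]'x; L[t,:]'y] is a
# block-diagonal product: kron(I, row) applied to the stacked
# coefficient vector [x; y].
row = randn(5)                 # plays the role of L[0.5, :]
x, y = randn(5), randn(5)
kron(Matrix(I, 2, 2), row') * [x; y] ≈ [row'x; row'y]   # true
```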
Note for ∞-quasi arrays we also need the case where the coefficients are interlaced, that is
T = ChebyshevT()
U = ChebyshevU()
V = InterlacedVectorBasis(T,U)
axes(V) == (axes(T,1), blockedrange(Fill(2,∞)))
@test V[0.5,Block(5)[1]] == [T[0.5,5],0]
@test V[0.5,Block(5)[2]] == [0,U[0.5,5]]
> L = LinearSpline(0:0.5:2)
> A = ArrayBasis(L, 2)
> axes(A) == (axes(L,1), Base.OneTo(2size(L,2)))
I assume this means the second component (I think this is the same as `Block`?) is stored in a contiguous block after the first? I agree that this makes more sense from a computational point of view, but I am not sure if I like the syntax `A[0.5,Block(1)[3]]`. Is `A[0.5,(1,3)]` too magical? What does the zero in `[L[0.5,3],0]` mean?
Is it possible to use two different bases for the different components? If I read correctly, your `ArrayBasis` uses the same basis for both components, and `InterlacedVectorBasis` supports different bases but stores the coefficients like `[t_1, u_1, t_2, u_2, ...]`. I would like a combination, `[t_1, t_2, ..., u_1, u_2, ...]`.
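To make the two coefficient layouts under discussion concrete, here is a plain-vector sketch (just ordinary `Vector`s standing in for the coefficient storage, not package API):

```julia
t = [1.0, 2.0, 3.0]          # coefficients in the first basis
u = [10.0, 20.0, 30.0]       # coefficients in the second basis

# InterlacedVectorBasis layout: [t_1, u_1, t_2, u_2, ...]
interlaced = vec(permutedims(hcat(t, u)))

# Blocked layout requested above: [t_1, t_2, ..., u_1, u_2, ...]
blocked = vcat(t, u)

interlaced == [1.0, 10.0, 2.0, 20.0, 3.0, 30.0]
blocked    == [1.0, 2.0, 3.0, 10.0, 20.0, 30.0]
```

Note the interlaced layout is the natural one when both expansions are infinite (the blocked layout would push `u` out to "index ∞"), which is presumably why `InterlacedVectorBasis` uses it.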
We need to use the block syntax: the coefficients should be `<: AbstractVector` so that operators are `<: AbstractMatrix`; that is, we want to preserve the feature that we are turning continuous operators into (structured) matrices that can be solved without knowing anything about bases.
> What does the zero in `[L[0.5,3],0]` mean?
A vector with two entries, the second one is 0....
This adds support for Kronecker products. Unlike standard arrays, we also Kronecker the axes. This uses tuples for now; we may want to also support `SVector`. We may want an alternative approach for Kroneckers of bases. The question is whether it is better to wrap a matrix of coefficients in a quasi-vector or a block vector. That is, do we want something like:
In my usual use case of tensor products of OPs, I actually want something else entirely: `QuasiKronTrav`, where the "matrix" of coefficients is traversed along diagonals. Operators then would get represented via `KronTrav` from LazyBandedMatrices.jl.
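To show what "traversed along diagonals" means concretely, here is a plain-Julia sketch of reading a coefficient matrix anti-diagonal by anti-diagonal, which is the ordering behind `KronTrav` (illustration only, not the LazyBandedMatrices.jl code):

```julia
# Traverse a 3×3 coefficient matrix along its anti-diagonals:
# block k collects C[k,1], C[k-1,2], ..., C[1,k].
C = reshape(1:9, 3, 3)
trav = [C[k - j + 1, j] for k in 1:3 for j in 1:k]
trav == [1, 2, 4, 3, 5, 7]   # true
```

This ordering is what makes the scheme work for `∞×∞` coefficient matrices: each anti-diagonal is finite, so the traversal gives a block vector with block sizes 1, 2, 3, ....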