Closed: MSeeker1340 closed this 3 years ago
From Slack:

> `Vector` will do well for a very long time (but yes, having a choice is always nice). `AbstractDiffEqOperator` is in DiffEqBase with the definition of `as_array`; that might be all that's needed? `is_linear(A)` which just uses the type parameter? For `*` I wouldn't use any `A_mul_B!`. That lack of mutation would be useful for static arrays.

Let's go forward with this.
Moved the notebook from my personal repo to my DiffEqOperators.jl fork.
Added another type of operator composition: block concatenation (see this section and simple test). The need for block concatenations arises for problems such as #59, as well as for systems of PDEs (e.g. the multivariate diffusion equations that @jlperla and I have been working on). Typically the combined operator can be expressed as a block diagonal matrix, which I have implemented in the notebook. General block concatenations using the `[A B; C D]` notation (i.e. `hvcat`) can also be written in a similar fashion, though probably a bit more complicated.
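As a rough sketch of the block-diagonal idea (the operators here are illustrative stand-ins, not the notebook's actual types), the materialized form of a two-component system can be built with `blockdiag` from SparseArrays:

```julia
using SparseArrays

# Hypothetical 1D second-difference operators for two components of a system
n = 4
L1 = spdiagm(-1 => ones(n-1), 0 => -2ones(n), 1 => ones(n-1))
L2 = 2 .* L1   # e.g. a different diffusion coefficient for the second component

# The combined operator acting on the stacked state [u1; u2] is block diagonal
L = blockdiag(L1, L2)
```

A lazy version would store `L1` and `L2` and route each block of the input vector to the corresponding operator instead of materializing `L`.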
Interesting. Yes, that's a good way to solve it. We might want to have `as_array` make use of BlockBandedMatrices.jl here:

https://github.com/JuliaMatrices/BlockBandedMatrices.jl

That will make `as_array` kick out a pretty efficient result type when necessary. I think another thing @jlperla is interested in is multidimensional operators. To build the linear operator on `vec(u)` in 2D space we need to utilize the Kronecker product. With the `as_array` setup a user can just `as_array` `L1` and `L2` and do the product there, but it may be useful to see if we can have a lazy Kronecker?
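For concreteness, here is a minimal sketch of the "materialize then `kron`" route for a 2D Laplacian (the stencils and grid sizes are illustrative):

```julia
using SparseArrays, LinearAlgebra

# Hypothetical 1D second-difference stencils on an nx × ny grid
nx, ny = 5, 4
Lx = spdiagm(-1 => ones(nx-1), 0 => -2ones(nx), 1 => ones(nx-1))
Ly = spdiagm(-1 => ones(ny-1), 0 => -2ones(ny), 1 => ones(ny-1))

# Operator on vec(u) for u of size (nx, ny). Since vec is column-major,
# the x-direction operator pairs with the identity on the y index:
L2d = kron(sparse(I, ny, ny), Lx) + kron(Ly, sparse(I, nx, nx))

# Sanity check: applying L2d to vec(u) matches applying Lx along columns
# and Ly along rows
u = rand(nx, ny)
@assert L2d * vec(u) ≈ vec(Lx * u + u * Ly')
```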
A lazy Kronecker is a good idea. A frequent case would be to have the drift be a function of the underlying state, while the Brownian term has the same variance. In that case, a lazy Kronecker would be a nice way to compose the variance term.
Actually, wouldn't a lazy Kronecker just be implemented by one of these block concatenations? That means it can be done via a new binary op `⊗` (`\otimes`) that does this?
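One way the lazy `⊗` could look (a sketch only; `LazyKron` and its methods are hypothetical, not DiffEqOperators' actual API) is to store the two factors and implement only the action on a vector, never forming the dense product:

```julia
using LinearAlgebra

# Hypothetical lazy Kronecker product of two operators
struct LazyKron{TA,TB}
    A::TA
    B::TB
end

⊗(A, B) = LazyKron(A, B)

Base.size(K::LazyKron) = size(K.A) .* size(K.B)

# Action on a vector via the identity (A ⊗ B) * vec(X) == vec(B * X * Aᵀ),
# which avoids ever materializing kron(A, B)
function Base.:*(K::LazyKron, x::AbstractVector)
    X = reshape(x, size(K.B, 2), size(K.A, 2))
    vec(K.B * X * transpose(K.A))
end

A, B = rand(3, 3), rand(4, 4)
x = rand(12)
@assert (A ⊗ B) * x ≈ kron(A, B) * x
```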
Going through old issues and found this: #43 Implement `collect`.

Seems the `collect` referenced there is basically my `as_array`. However, is this the intended behavior of `collect` in base Julia? From the latest documentation it seems `collect` always returns a full `Array`:

https://docs.julialang.org/en/latest/base/collections/#Base.collect-Tuple{Any}

Not sure if the behavior has changed in v0.7 and the documentation hasn't followed up. In v0.6.2, `collect` does behave like `full`:
```julia
julia> collect(speye(10))
10×10 Array{Float64,2}:
 1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
 0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
 0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.0  0.0
 0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.0
```
`collect` and `full` are very similar; in fact I think `full` may be deprecated in v0.7. Its definition is that it collects over the iterator, so it would produce something dense. @dlfivefifty has been working on a more refined idea of specialized array types in https://github.com/JuliaLang/julia/pull/25558. I'm not sure if that PR introduces something like `as_array` for how it has to handle views.
@ChrisRackauckas Not sure you wanted me to pipe in, but I'm not sure what `as_array` is from this conversation.
`as_array` is a method that creates the "best" structured array from the lazy operator. So if it knows it's banded then it returns a `BandedMatrix`, but if it doesn't have good structure and is merely sparse it returns a `SparseMatrixCSC`. Does such a function exist?
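A sketch of what such a dispatch-based `as_array` could look like (everything here is hypothetical; the operator types are toy stand-ins, not DiffEqOperators' real ones):

```julia
using SparseArrays, LinearAlgebra

# Toy stand-ins for lazy operators with known structure
struct TridiagonalOp
    dl::Vector{Float64}; d::Vector{Float64}; du::Vector{Float64}
end
struct GeneralSparseOp
    is::Vector{Int}; js::Vector{Int}; vs::Vector{Float64}; n::Int
end

# as_array returns the tightest structured array type each operator admits
as_array(L::TridiagonalOp)   = Tridiagonal(L.dl, L.d, L.du)
as_array(L::GeneralSparseOp) = sparse(L.is, L.js, L.vs, L.n, L.n)

T = as_array(TridiagonalOp(ones(3), -2ones(4), ones(3)))
S = as_array(GeneralSparseOp([1, 4], [3, 2], [1.0, 2.0], 4))
```

The point is that the return type varies with the operator's structure, which is exactly what `collect` (always a dense `Array`) does not do.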
Updates about lazy Kronecker products and sums here and test here.

I only wrote the two-dimensional case. It's possible to do an N-dimensional case (it should look a lot like `DiffEqCompositeOperator`) but the multi-indexing part is a bit messy (for example, how can I express something like "loop over the multidimensional array and multiply `L` with each 1D slice in the `j`th dimension"?)
Relevant discussion: #21 Taking operators on dimensional models seriously.
Still not sure if this is the right approach though. Also would the concept of tensor/Kronecker products be a bit hard to grasp for regular users?
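On the "multiply `L` with each 1D slice in the `j`th dimension" question, one (allocating) way to express it in base Julia is `mapslices`; a sketch, with `apply_along` a hypothetical helper name:

```julia
# Apply a 1D operator (a plain matrix here, for illustration) along
# dimension j of a multidimensional array u
apply_along(L::AbstractMatrix, u::AbstractArray, j::Int) =
    mapslices(v -> L * v, u; dims = j)

L = [0.0 1.0; 1.0 0.0]          # swaps the two entries of each 1D slice
u = reshape(1.0:8.0, 2, 2, 2)
v = apply_along(L, u, 1)         # u[:,1,1] == [1.0, 2.0] becomes [2.0, 1.0]
```

A non-allocating version would instead loop over `CartesianIndices` of the remaining dimensions and call an in-place multiply on each slice.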
Well the tensor/Kronecker products are how they'd be implemented, but we can probably find a better higher level interface to expose to users.
For sure. I think the syntactic sugar here is very important to get right (or at least have a plan). But if you can write the math as a cartesian product of the domains, then the kronecker symbol may be defensible.
@MSeeker1340 I am hoping that this library is the mindblowing combination of blackboard-math and generic but efficient code that we can use to show people the power of Julia, so I don't mind taking the time necessary to ensure the right high-level interface is possible. How much of that is in the https://github.com/JuliaDiffEq/ModelingToolkit.jl vs. this library, I am not entirely sure.
Chris, since many of the discrete states in economics would be named discrete states in a Markov chain, it might be worth thinking about how this might interact with the definition of finite domains.
> I'm not sure if that PR introduces something like `as_array` for how it has to handle views
No, that's just for dispatch of `mul!`. But it would be useful for overriding `mul!` for lazy arrays. (Without that PR, it requires many, many ambiguity cases to override `mul!`.)
What about overriding `AbstractSparseMatrix(A)` to construct the "best" sparse matrix?
Seems odd to me to give an abstract type a fake "constructor". Is that done anywhere else?
How is it fake? Yes, it's standard practice:

```julia
julia> Integer(3.0)
3
```
> How is it fake?

Well, you cannot construct an abstract type, so in some sense it's not really a constructor. But okay, I didn't know that it's a standard thing (I never knew you could use `Integer` as a function). `AbstractSparseMatrix` makes sense then.
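For reference, a minimal sketch of what that overload could look like on a package's own type (the `MyLazyOp` name is hypothetical; this is the `Integer(3.0)` pattern applied to `AbstractSparseMatrix`):

```julia
using SparseArrays

# Hypothetical lazy operator defined by a diagonal
struct MyLazyOp
    diag::Vector{Float64}
end

# "Constructor" method on the abstract type: build the best concrete
# sparse matrix, in the same spirit as Integer(3.0) == 3
(::Type{AbstractSparseMatrix})(L::MyLazyOp) = spdiagm(0 => L.diag)

S = AbstractSparseMatrix(MyLazyOp([1.0, 2.0, 3.0]))
```

This is not type piracy since the argument type is the package's own.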
The current interface of `AbstractDiffEqOperator` is a bit messy and could use some rework. Some of the major problems are:

- The definitions are not centralized, with some of the type hierarchy defined in DiffEqBase instead of DiffEqOperators.
- `AbstractDiffEqOperator` is defined as a subtype of `AbstractLinearMap` from LinearMaps.jl. This is problematic because the combination and composition of operators will be of type `LinearCombination` and `CompositeMap`, which go beyond the `AbstractDiffEqOperator` type hierarchy. Also, the implementations in LinearMaps.jl for operator combinations and compositions are not optimal by our standards (e.g. multiplication requires allocation of additional memory).
- Time-dependent coefficients. Currently they are implemented as `DiffEqScalar` and included within array operators. Presumably we need to do the same for all operators, which can be cumbersome.

Below is my draft for a new `DiffEqOperator` interface that gets rid of the LinearMaps dependence and builds up the type hierarchy from scratch. The overall structure is similar to LinearMaps while at the same time tailored to the needs of JuliaDiffEq. Feedback is welcome :P

New DiffEqArrayOperator Interface Draft
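To make the direction concrete, a minimal sketch of a self-contained hierarchy with time-dependent coefficients folded in (names follow the discussion above, but the fields and signatures here are illustrative, not the draft's final API):

```julia
# Hypothetical self-contained operator hierarchy without the LinearMaps.jl dependence
abstract type AbstractDiffEqOperator{T} end
abstract type AbstractDiffEqLinearOperator{T} <: AbstractDiffEqOperator{T} end

# An array-backed operator with a time-dependent scalar coefficient,
# replacing the DiffEqScalar-inside-the-operator approach
mutable struct DiffEqArrayOperator{T,M<:AbstractMatrix{T},F} <: AbstractDiffEqLinearOperator{T}
    A::M
    update_func::F   # t -> coefficient
    coeff::T
end

# Called by the solver before each operator application
update_coefficients!(L::DiffEqArrayOperator, t) = (L.coeff = L.update_func(t); L)

Base.:*(L::DiffEqArrayOperator, x::AbstractVector) = L.coeff * (L.A * x)

L = DiffEqArrayOperator([2.0 0.0; 0.0 2.0], t -> t, 1.0)
update_coefficients!(L, 3.0)
L * [1.0, 1.0]   # the action is now scaled by the coefficient at t = 3.0
```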