SciML / PDERoadmap

A repository for the discussion of PDE tooling for scientific machine learning (SciML) and physics-informed machine learning
https://tutorials.sciml.ai/

Design irregular grids with auto expanding lazy operators #15

Closed · jlperla closed this issue 6 years ago

jlperla commented 6 years ago

See #13 for more. The question is how these sorts of interfaces would look for operators that automatically expand to the necessary extension when the domain itself is irregular.

For the design of the implementation, the main issue is caching the Fornberg results. In particular, see https://github.com/JuliaDiffEq/PDERoadmap/issues/13#issuecomment-386197584

ChrisRackauckas commented 6 years ago

This is the difficult issue implementation-wise. It basically comes down to this.

L*Q is always what we want. Sometimes Q gets larger due to operators which jump beyond the boundary, and not all operators are defined that far beyond the boundary. However, given the domain of influence in L, it is equivalent to just add columns of zeros so that all of the operators are the same size. So algebraically there is no problem here: add columns and you're done (which is a good implementation of AbstractMatrix on the composed operator).
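A minimal sketch of that zero-column padding, where pad_columns is just an illustrative helper and the ones(...) matrices stand in for actual stencil operators:

using SparseArrays

# Illustrative helper: pad an operator with zero columns on each side so that
# every operator in the composition acts on the same extended vector length.
function pad_columns(A::AbstractMatrix, left::Int, right::Int)
    m = size(A, 1)
    hcat(spzeros(m, left), sparse(A), spzeros(m, right))
end

# A 3-point operator on 5 extended points and a 5-point operator on 7 extended
# points can both be made 3×7, after which they add and compose freely.
L1 = pad_columns(ones(3, 5), 1, 1)
L2 = ones(3, 7)
Lsum = L1 + L2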

However, the implementation of the lazy operator is more interesting. If L is only naturally defined on the closure of the domain, or is an array defined on the interior, or has coefficients which only exist on the interior, then making that extension lazily is difficult. Essentially, the central difference operator is defined as:

# second-difference stencil, applied over the interior of the extended vector u
# and written into the interior-indexed output du
for i in 2:length(u)-1
  du[i-1] = mu[i] * (u[i-1] - 2u[i] + u[i+1]) / dx[i]
end

Now somehow we have to get back to that. One way to do this is by restriction: Q*u builds a big vector, and then we can downsize to the right portion of u and multiply. That's probably the simplest, but it requires knowing to put a restriction operator in there and it requires a buffer vector.
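A rough sketch of that restriction approach, assuming for illustration that Q is an explicit extension matrix of size (n + 2k) × n while this particular L only needs an extension of n + 2 points; the function name and arguments are made up for the example:

function apply_with_restriction!(du, Q, u, mu, dx, k)
    ubar = Q * u                          # buffer vector on the common extension
    v = view(ubar, k:length(ubar)-k+1)    # restrict to the n + 2 points this L uses
    for i in 2:length(v)-1
        du[i-1] = mu[i] * (v[i-1] - 2v[i] + v[i+1]) / dx[i]
    end
    du
end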

There are some smarter ways. Q could return a tagged vector of sorts, with extra fields that detail the interior. Multiplication by the tagged vector could then do one of two things. It could change the internal indexing, which would be rough to implement. Or, the easy way: it could lazily restrict by using a view and then operate internally on the view, with the output tagging there simply so that a dispatch knows to use the view. Still, this would complicate the interface quite a bit. Instead, the composed operator could in theory be smart enough to do this view itself, which might be easier.
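One possible shape for that tagged-vector idea, purely as a sketch (the type and field names here are not part of any existing API):

# Q*u could return the extended data together with the index range of the
# interior, so that an operator needing a narrower extension can restrict
# lazily through a view instead of copying.
struct ExtendedVector{T, V<:AbstractVector{T}}
    data::V                    # values on the extended grid
    interior::UnitRange{Int}   # indices of the interior points
end

restrict(v::ExtendedVector, extra::Int) =
    view(v.data, first(v.interior)-extra : last(v.interior)+extra)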

It's just nitty-gritty implementation detail, though, since these are all equivalent ways to do the same computation.

jlperla commented 6 years ago

Yes. With regular grids the implementation has some tricky parts, but I think this all makes sense. Assuming we go with something along the lines of the abstract step in #4 (i.e., typeof(grid) <: Range, supporting the step(grid) and length(grid) operations), then it is relatively easy to see how the extension would occur.
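For concreteness, a minimal sketch of such an extension on a regular grid, using only step(grid) and length(grid) (Range is AbstractRange in current Julia, and extend_grid is just an illustrative name):

extend_grid(grid::AbstractRange, k::Integer = 1) =
    range(first(grid) - k*step(grid), step = step(grid), length = length(grid) + 2k)

grid = range(0.0, 1.0, length = 11)   # interior grid, spacing 0.1
ext  = extend_grid(grid)              # 13 points, from -0.1 to 1.1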

Whatever the design here, I think it is also worth thinking through how irregular grids (i.e., supporting any AbstractArray or something like that) are extended. Do we assume that the extensions have to use exactly the same spacing as the closest grid spacing at the boundary? I think this is a reasonable assumption, but I just want to bring it up as a design question.
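Under that assumption, extending an irregular grid might look something like this (extend_irregular is an illustrative name, not an existing function):

# ghost points reuse the spacing of the closest interior cell at each boundary
function extend_irregular(x::AbstractVector)
    left  = x[1]   - (x[2]   - x[1])
    right = x[end] + (x[end] - x[end-1])
    vcat(left, x, right)
end

x    = [0.0, 0.1, 0.3, 0.6, 1.0]   # irregular interior grid
xext = extend_irregular(x)          # [-0.1, 0.0, 0.1, 0.3, 0.6, 1.0, 1.4]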

The design of this may (or may not) influence the general approach discussed above. In some ways, if the operators are just writing to offsets in a buffer, then the exact location of the grid may not matter (except for the operators without offsets).