SciML / PDERoadmap

A repository for the discussion of PDE tooling for scientific machine learning (SciML) and physics-informed machine learning
https://tutorials.sciml.ai/

Discussion on design of domain and grid interface #4

jlperla opened this issue 6 years ago

jlperla commented 6 years ago

This issue isolates and continues discussions in https://github.com/JuliaDiffEq/DifferentialEquations.jl/issues/260 with @dlfivefifty and @ChrisRackauckas on a joint design of the interface for domains, etc. between the ApproxFun, DiffEqOperators, and ModelingToolkit libraries. We hope to implement it in the prototype: https://github.com/JuliaDiffEq/PDERoadmap/pull/1

Also see https://github.com/JuliaDiffEq/ModelingToolkit.jl/blob/master/test/domains.jl

For a starting point in one dimension,

d = Interval(1.0, 10.0) #Continuous domain
x = RegularGrid(d, N)

Note: I think it is better to give the number of points rather than a dx, but I am open. The reason is that if you give a dx you never know how well things line up, or how many points it ends up creating.

When things move to an irregular grid,

d = Interval(1.0, 10.0) #Continuous domain
grid_points = 1.0:0.01:10.0 #This is regular, but you get the point.
x = IrregularGrid(d, grid_points)

For the product of a discrete domain with 2 values and continuous domain (which comes up a lot in economics applications),

d = DiscreteDomain(2) * Interval(0.0,5.0) #Or tensor product?

Or, when we later want to enable naming of types for the discrete domain (e.g. for a continuous-time Markov chain of switching between L and H productivity in a model)

d1 = Interval(1.0, 2.0)
d2 = DiscreteDomain([:L, :H])
d = d1 ⊗ d2 #Is the tensor product the right one to use here?

Or, instead, does it make sense to use an enumeration? I really don't like how scoping works with enumerations in Julia, which limits the clarity of the code.
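To make the shape of the proposal concrete, here is a rough sketch of what such types could look like; all of it is hypothetical and illustrative, not an existing ModelingToolkit or ApproxFun API, and the product/⊗ combinations are left out:

# Hypothetical sketch of the proposed domain/grid types (illustrative only)
struct Interval{T}
    a::T
    b::T
end

struct RegularGrid{T}
    domain::Interval{T}
    points::Vector{T}
end
RegularGrid(d::Interval, N::Integer) =
    RegularGrid(d, collect(range(d.a, stop=d.b, length=N)))

struct IrregularGrid{T}
    domain::Interval{T}
    points::Vector{T}
end

struct DiscreteDomain{L}
    labels::Vector{L}
end
DiscreteDomain(n::Integer) = DiscreteDomain(collect(1:n))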

ChrisRackauckas commented 6 years ago

Though maybe our wires are crossed. The really important thing is that SplitODEProblem supports lazy structured matrices (that may depend on time) that are realised as a distributed structured matrix remotely: that is, the workers have access to the lazy matrix and know how to populate their blocks.

Yes it does. You just update_coefficients! from the operator interface, and all of the solvers should be properly making use of it.
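For reference, a rough self-contained sketch of the idea behind that interface: the lazy operator carries its own (possibly time-dependent) coefficients and exposes update_coefficients!, which the solver calls before applying it. Names and the parameter container below are illustrative, not the actual DiffEqOperators implementation:

using LinearAlgebra

mutable struct ScaledLaplacian{T}
    a::T                                   # time-dependent coefficient a(t)
    stencil::Tridiagonal{T, Vector{T}}     # fixed finite-difference stencil
end

# The solver calls this before applying the operator, so the lazy operator stays current.
function update_coefficients!(A::ScaledLaplacian, u, p, t)
    A.a = p.a_of_t(t)
    return A
end

# Lazy application: never materializes a(t) * stencil as a new matrix.
function LinearAlgebra.mul!(du::AbstractVector, A::ScaledLaplacian, u::AbstractVector)
    mul!(du, A.stencil, u)
    du .*= A.a
    return du
end

# Illustrative use
A = ScaledLaplacian(1.0, Tridiagonal(fill(1.0, 4), fill(-2.0, 5), fill(1.0, 4)))
update_coefficients!(A, nothing, (a_of_t = t -> exp(-t),), 0.5)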

jlperla commented 6 years ago

I don't understand the Q * L part.

The Q stuff comes from the https://github.com/JuliaDiffEq/PDERoadmap/releases/download/v0.2/linear_operators_overview.pdf which in turn was formalizing the algebra for our discussion in https://github.com/JuliaDiffEq/DifferentialEquations.jl/issues/260

You can see the examples there.

But I also think that I did something confusing here. You cannot create the Q directly from the B as it requires the L. So perhaps something like

B = [( I → -1);  ( I →   1)];
L = 𝒟^2
Q = boundaryextrapolation(B, L)
heat = L * Q 
prob = ODEProblem(heat, u_0, tspan)

Alternatively, a function that just extrapolates to the boundary and creates the internal L * Q directly is also defensible. That could be done by overloading vcat:

M = 100
x = linspace(0.0, 1.0, M)
B = [( I → -1);  ( I →   1)];
L = 𝒟(x)^2
L_q = [L; B] #Internally creates a `L*Q` setup, creating the correct Q from L and B
prob = ODEProblem(L_q, u_0, tspan)

This relies on the update_coefficients! for the internal linear operators.

Or, better yet, since L_q : R^M -> R^M is linear, we can use an exponential integrator. Or did I miss something here?
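As a minimal sketch of what that buys when L_q is constant in time (in practice L_q would stay lazy and one would use a Krylov-based exponential integrator rather than a dense exp):

using LinearAlgebra

M = 5
L_q = Tridiagonal(fill(100.0, M-1), fill(-200.0, M), fill(100.0, M-1))  # e.g. the Dirichlet heat operator
u_0 = rand(M)
t = 0.01
u_t = exp(t * Matrix(L_q)) * u_0   # exact solution of u' = L_q * u at time t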

dlfivefifty commented 6 years ago

I think [L; B] is a bad idea: I used to do something similar for eigs in ApproxFun, and it's too much of an abuse of notation. Also, how do you include the forcing terms? (for problems like B*u = c and u_t = L*u + f)

Has it come up that eigenvalue problems have the exact same issue (and solution) of incorporating the boundary conditions into the "mass" matrix of a generalised eigenvalue problem? Starting with the syntax for eigs (and the implementation) might not be a bad idea as it's simpler than time-stepping: no nonlinear terms. I would say for eigs the following syntax seems reasonable:

eigs(L; constraints=(B,c))

EDIT: for eigenvalue problems the constraints should always be zero as you want linearity, so I think it would be:

eigs(L; constraints=B)

jlperla commented 6 years ago

Yeah, you are right, that doesn't work. What I do know is that:

Let's take this discussion to a new issue to focus on the interface, as opposed to the discussion of the domain here.

ChrisRackauckas commented 6 years ago

We don't know the general approach to calculating the Q from a B and an L.

Yes we do. It's just writing down a choice of interpolating polynomial on the full space which satisfies the BCs.

It is not really useful to have the Q on its own, as we only use the L Q product for calculations. However, in most cases the interior of the Q is an identity, so the L Q product can probably be done in a smart and lazy way.

We can build it into the pre-made operators if we want to, and that would make them square. That's a lot like the current DiffEqOperators.jl design, but with a few fixes to some portions by standardizing on only doing the interior.
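To make the structure of Q concrete, a small sketch (assuming second-order differences, homogeneous BCs, and dx = 1; not library code): Q extends the M interior values to the M + 2 grid values, its interior block is the identity, and only the boundary rows change with the BC.

using LinearAlgebra

M = 4
# Q : interior (M values) -> extended grid (M + 2 values)
Q_dirichlet = [zeros(1, M); Matrix(I, M, M); zeros(1, M)]                           # boundary values are 0
Q_neumann   = [Matrix(I, M, M)[1:1, :]; Matrix(I, M, M); Matrix(I, M, M)[M:M, :]]   # copy the neighbouring value

# interior second-difference operator on the extended grid
L = zeros(M, M + 2)
L[diagind(L, 0)] .= 1.0; L[diagind(L, 1)] .= -2.0; L[diagind(L, 2)] .= 1.0

L * Q_dirichlet   # the familiar square [1 -2 1] tridiagonal
L * Q_neumann     # same, but with -1 in the first and last diagonal entries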

jlperla commented 6 years ago

Yes we do. It's just writing down a choice of interpolating polynomial on the full space which satisfies the BCs.

Let me rephrase: at the point of us writing this up, and discussing with you, we did not have an algorithmic way to take a B, L, and spit out a Q. Though we have ways to check if a particular Q is correct. If you have an algorithm to create the Q then write it down so that @MSeeker1340 can consider implementing it.

But considering how tricky the generic machinery for this could be (i.e. we don't even know the size of the extension space until we compose all operators) I am not 100% sure it is necessary yet. Adding to that the hope to make the Q part of the L * Q lazy, and it seems like we shouldn't go too general too quickly.

We can build it into the pre-made operators if we want to, and that would make them square. That's a lot like the current DiffEqOperators.jl design, but with a few fixes to some portions by standardizing on only doing the interior.

For me, the more that this stuff simply does a more complete job of implementing DiffEqOperators.jl with operator composition, the happier I get. While I like the idea of making the interface look nice and DSL like, I don't like the idea of radically increasing the complexity of what this is intended to do.

dlfivefifty commented 6 years ago

Here's code that spits out Q for any B and L, using just linear algebra (no interpolating polynomials):

using LinearAlgebra

n = 10
h = 1/n
L = zeros(n-2,n); L[diagind(L,0)] .= 1/h^2; L[diagind(L,2)] .= 1/h^2; L[diagind(L,1)] .= -2/h^2;

function projection(B, L)
    n = size(L,2)
    @assert size(B,2) == n
    @assert size(B,1) == 2 # can be generalised to arbitrary number of BCs

    # find the first column of B with a nonzero entry
    j1 = 0
    for j=1:size(B,2)
        if norm(B[:,j]) ≠ 0
            j1 = j
            break
        end
    end
    # find a second column so that B[:,[j1,j2]] is invertible
    j2 = 0
    for j=j1+1:size(B,2)
        if rank(B[:,[j1,j]]) == 2
            j2 = j
            break
        end
    end

    # normalise B so that columns j1 and j2 form the identity
    B = inv(B[:,[j1,j2]])*B

    # eliminate j1 & j2
    P = Matrix(I,n,n)[:,[j1,j2]]
    P̃ = Matrix(I,n,n)[:, setdiff(1:n, [j1,j2])]

    (I-P*B)*P̃
end

# dirichlet
B = zeros(2,n); B[1,1] = B[2,end] = 1
Q = projection(B,L)
L*Q
# 8×8 Array{Float64,2}:
#  -200.0   100.0     0.0     0.0     0.0     0.0     0.0     0.0
#   100.0  -200.0   100.0     0.0     0.0     0.0     0.0     0.0
#     0.0   100.0  -200.0   100.0     0.0     0.0     0.0     0.0
#     0.0     0.0   100.0  -200.0   100.0     0.0     0.0     0.0
#     0.0     0.0     0.0   100.0  -200.0   100.0     0.0     0.0
#     0.0     0.0     0.0     0.0   100.0  -200.0   100.0     0.0
#     0.0     0.0     0.0     0.0     0.0   100.0  -200.0   100.0
#     0.0     0.0     0.0     0.0     0.0     0.0   100.0  -200.0
#

# neumann
B = zeros(2,n); B[1,1] = -1; B[1,2] = 1; B[2, end-1] = -1; B[2,end] = 1
Q = projection(B,L)
L*Q
# 8×8 Array{Float64,2}:
#  -100.0   100.0     0.0     0.0     0.0     0.0     0.0     0.0
#   100.0  -200.0   100.0     0.0     0.0     0.0     0.0     0.0
#     0.0   100.0  -200.0   100.0     0.0     0.0     0.0     0.0
#     0.0     0.0   100.0  -200.0   100.0     0.0     0.0     0.0
#     0.0     0.0     0.0   100.0  -200.0   100.0     0.0     0.0
#     0.0     0.0     0.0     0.0   100.0  -200.0   100.0     0.0
#     0.0     0.0     0.0     0.0     0.0   100.0  -200.0   100.0
#     0.0     0.0     0.0     0.0     0.0     0.0   100.0  -100.0

# IVP
B = zeros(2,n); B[1,1] = B[2,2] = 1
Q = projection(B,L)
L*Q

# 8×8 Array{Float64,2}:
#   100.0     0.0     0.0     0.0     0.0     0.0     0.0    0.0
#  -200.0   100.0     0.0     0.0     0.0     0.0     0.0    0.0
#   100.0  -200.0   100.0     0.0     0.0     0.0     0.0    0.0
#     0.0   100.0  -200.0   100.0     0.0     0.0     0.0    0.0
#     0.0     0.0   100.0  -200.0   100.0     0.0     0.0    0.0
#     0.0     0.0     0.0   100.0  -200.0   100.0     0.0    0.0
#     0.0     0.0     0.0     0.0   100.0  -200.0   100.0    0.0
#     0.0     0.0     0.0     0.0     0.0   100.0  -200.0  100.0

jlperla commented 6 years ago

Great! Can you write a version of this that works for a general setup with more than one extension node in each direction? That is, let there be a total of M_E_m extension nodes on the bottom and M_E_p on the top, so that M_bar = M + M_E_m + M_E_p, where B has M_bar columns, etc.

dlfivefifty commented 6 years ago

I don't know what you mean by "top" and "bottom". Do you mean 3 boundary conditions, 2 at the left and 1 at the right? Here it is:


function projection(B, L)
    n = size(L,2)
    b = size(B,1)
    @assert size(B,2) == n
    @assert size(L,1) == n - b

    # greedily pick b columns of B that form an invertible square block
    js = Vector{Int}()
    for ξ = 1:b
        j1 = 0
        for j=1:size(B,2)
            if rank(B[:,vcat(j,js...)]) == ξ
                j1 = j
                break
            end
        end
        push!(js,j1)
    end
    @show js
    B = inv(B[:,js])*B

    # eliminate the chosen columns js
    P = Matrix(I,n,n)[:,js]
    P̃ = Matrix(I,n,n)[:, setdiff(1:n, js)]

    (I-P*B)*P̃
end

# linear kdv u_t = u_xxx
n = 10
h = 1/n
L = zeros(n-2,n); L[diagind(L,0)] .= 1/h^2; L[diagind(L,2)] .= 1/h^2; L[diagind(L,1)] .= -2/h^2;
D = zeros(n-3,n-2); D[diagind(D,0)] .= -1/h; D[diagind(D,1)] .= 1/h;
L = D*L

# IVP + Right neumann
B = zeros(3,n); B[1,1] = 1; B[2,1] = -1; B[2,2] = 1; B[3,end-1] = -1; B[3,end] = 1
Q = projection(B,L)
L*Q
# 7×7 Array{Float64,2}:
#  -3000.0   1000.0      0.0  …      0.0      0.0      0.0
#   3000.0  -3000.0   1000.0         0.0      0.0      0.0
#  -1000.0   3000.0  -3000.0         0.0      0.0      0.0
#      0.0  -1000.0   3000.0      1000.0      0.0      0.0
#      0.0      0.0  -1000.0     -3000.0   1000.0      0.0
#      0.0      0.0      0.0  …   3000.0  -3000.0   1000.0
#      0.0      0.0      0.0     -1000.0   3000.0  -2000.0

dlfivefifty commented 6 years ago

Though I'm really confused why we needed Q in the first place... What's wrong with using the fact that

B u = c    <=> B_t u + B*u_t = 0

where B_t is the derivative of B w.r.t. t (which is just zero for Dirichlet/Neumann) to write the ODE as

[B; R]*u_t = [B_t; L]*u

Then to get a regular ODE we have:

u_t = inv([B; R])*[B_t; L]*u

This simple approach seems to also recover the standard Dirichlet/Neumann matrices.

Thanks @wormell for this observation.
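For what it's worth, here is a quick numerical sketch of that claim, taking R to be the restriction to the interior and B_t = 0 (time-independent BCs):

using LinearAlgebra

n = 10; h = 1/n
L = zeros(n-2, n)
L[diagind(L,0)] .= 1/h^2; L[diagind(L,1)] .= -2/h^2; L[diagind(L,2)] .= 1/h^2

B  = zeros(2, n); B[1,1] = 1; B[2,end] = 1     # Dirichlet rows
Bt = zeros(2, n)                               # B is constant in time
R  = Matrix(I, n, n)[2:end-1, :]               # restriction to the interior

A = inv([B; R]) * [Bt; L]
# The first and last rows of A are zero (the boundary values stay put) and the
# interior rows are the usual [1 -2 1]/h^2 second-difference stencil.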

jlperla commented 6 years ago

Do you mean 3 boundary conditions, 2 at the left and 1 at the right?

Yeah, I guess I am having a little trouble separating the count of what we might call the equations for the boundary conditions from the number of discretized equations that it generates. The key to me is that with jumps the size of the extended space can get bigger than just adding two points when discretized. Typically you would write the boundary conditions as a single equation (pre-discretization)... For example, a reflection of the jump... But I think that adds in multiple boundary condition equations when discretized.

dlfivefifty commented 6 years ago

Can you write down the differential equation with boundary conditions you had in mind?

ChrisRackauckas commented 6 years ago

Though I'm really confused why we needed Q in the first place... What's wrong with using the fact that

That's a mathematical solution, but not a computational one. inv([B; R]) is not a nice operation, while the matrix that it expands to is simple and easy to just directly write down via an alternative derivation by just extrapolating from the interior to the boundary with the right order. It's mathematically equivalent but it's the difference between requiring large dense operators or a matrix-free lazy operator.

dlfivefifty commented 6 years ago

OK, in this case we can write

[B[1,:]; R; B[2,:]] = I + U*V'

where U is sparse and use the Woodbury formula

inv([B[1,:]; R; B[2,:]]) = I - U*inv(I + V'*U)*V'

and this will still be sparse. But I think it's just writing the same thing in different ways.
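A small numerical check of that identity (purely illustrative; U and V are chosen so that I + U*V' is the identity with its first and last rows replaced by the boundary rows):

using LinearAlgebra

n = 10
B = zeros(2, n); B[1,1] = -1; B[1,2] = 1; B[2,end-1] = -1; B[2,end] = 1   # e.g. Neumann rows

U = Matrix(I, n, n)[:, [1, n]]       # columns e_1 and e_n
V = Matrix((B - U')')                # rows 1 and n of I + U*V' become B[1,:] and B[2,:]

Mfull = I + U * V'
@assert Mfull[1,:] == B[1,:] && Mfull[end,:] == B[2,:]

# Woodbury: the inverse is also a low-rank correction of the identity
Mfull_inv = I - U * inv(I + V' * U) * V'
@assert Mfull_inv ≈ inv(Mfull)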

wormell commented 6 years ago

It may not parallelise well but one could also view the problem as

u_t = [B; R] \ ([B_t; L]*u)

which is a sparse problem.
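A sketch of how that could look as an ODE right-hand side, factorizing the sparse [B; R] once up front (function and argument names are illustrative):

using SparseArrays, LinearAlgebra

# Build du/dt = [B; R] \ ([B_t; L] * u) as an in-place ODE right-hand side.
function make_rhs(B, R, Bt, L)
    F = lu(sparse([B; R]))          # sparse LU of the square "mass" matrix, computed once
    Mop = sparse([Bt; L])
    return (du, u, p, t) -> (du .= F \ (Mop * u))
end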

ChrisRackauckas commented 6 years ago

Sparse is still much much larger than O(1).

ChrisRackauckas commented 6 years ago

Two things to mention. One is that it's fine to use mass matrices if one is using an implicit or Rosenbrock method for solving the timestepping problem for the PDE. That's a good traditional solution. However, the issue is that not all PDE solvers are implicit or semi-implicit.

But secondly, all of this operator formalism goes all the way back to where we started. Essentially what it says is that A = inv([B; R])*[B_t; L] is the square linear operator that we are looking for s.t. u_t = A*u is an equation on the interior which incorporates the boundary conditions. This is the same thing as A = L*Q. Either way, you have a way of defining your square operator directly. And, if the boundary conditions are affine, this can be done separately/independently on the two+ operators.

Thus we have finally come back around to showing that, as long as your BCs are affine (Robin + more, i.e. they discretize to be satisfied by a linear combination of the interior), it's fine to develop lazy finite difference, upwinding, and jump operators separately and define their square form as their composition with the inversion of the BCs (or as the composition with the extrapolation to the boundary and beyond). With our operator formalism it's now pretty quick to prove that. Since that is a superset of @jlperla's needs here, that seems to be a good direction to go with for DiffEqOperators.jl: a library for automatic generation of lazy finite difference stencils and lazy PDE operators on the interior points with affine BCs. Anything more needs extra consideration, but this covers a lot of use cases.

dlfivefifty commented 6 years ago

I think I understand the issue now: any differential operator can be thought of as using either left or right derivatives. This doesn't actually change the definition of D, but it does change the definition of R. Here are some examples:

n = 10;
h = 1/n
# D is either the left or the right differential operator
D = zeros(n-1,n); D[diagind(D,0)] .= -1/h; D[diagind(D,1)] .= 1/h;
# R is restriction for right differential operator, that is, from x_0,…,x_n to x_1,…,x_n
R = function(n)
    R = zeros(n-1,n); R[diagind(R,1)] .=1 ; R
end
# L is restriction for left differential operator, that is, from x_0,…,x_n to x_0,…,x_{n-1}
L = function(n)
    L = zeros(n-1,n); L[diagind(L,0)] .=1 ; L
end

# we need to choose R/L so that B is zero in the corresponding columns
# we can find the correct columns automatically
function nonsingularcolumns(B)
    js = Vector{Int}()
    b = size(B,1)
    for ξ = 1:b
        j1 = 0
        for j=1:size(B,2)
            if rank(B[:,vcat(j,js...)]) == ξ
                j1 = j
                break
            end
        end
        push!(js,j1)
    end
    js
end

# A restriction is a combination of Rs and Ls
function restriction(num_R, num_L, n)
    C = Matrix(I,n,n)
    for k = 1:num_R
        C = R(n-k+1)*C
    end
    for k = 1:num_L
        C = L(n-num_R-k+1)*C
    end
    C
end

# to determine number of R's and L's, we need to see how many "left" and how many "right"
# boundary conditions there are
function restriction(B)
    n = size(B,2)
    js = nonsingularcolumns(B)

    num_R = sum(js .< n/2)
    num_L = sum(js .> n/2)
    restriction(num_R, num_L, n)
end

# Dirichlet
B = zeros(2,n); B[1,1] = B[end,end] = 1
restriction(B)
# 8×10 Array{Float64,2}:
#  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
#  0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
#  0.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0
#  0.0  0.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0
#  0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
#  0.0  0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0  0.0
#  0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
#  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.0  0.0

# IVP
B = zeros(2,n); B[1,1] = 1; B[2,1] = -1; B[2,2] = 1;
restriction(B)
# 8×10 Array{Float64,2}:
#  0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
#  0.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0
#  0.0  0.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0
#  0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
#  0.0  0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0  0.0
#  0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
#  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.0  0.0
#  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.0

# KDV IVP + Right BC

B = zeros(3,n); B[1,1] = 1; B[2,1] = -1; B[2,2] = 1; B[3,end] = 1;
restriction(B)

# 7×10 Array{Float64,2}:
#  0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0
#  0.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0  0.0
#  0.0  0.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0  0.0
#  0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0  0.0  0.0
#  0.0  0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0  0.0
#  0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.0  0.0  0.0
#  0.0  0.0  0.0  0.0  0.0  0.0  0.0  0.0  1.0  0.0

jlperla commented 6 years ago

Can you write down the differential equation with boundary conditions you had in mind?

Sorry for the long response, but let me try to give a sketch of it. I am having trouble figuring out a few of the detailed boundary conditions, as finance people tend to be sloppy when defining them, but here is the basic setup.

OK, let's talk about discretizing the L for a particular process before we talk about the boundary conditions. Define the x on a grid of M points, and let dx=1 just to make things easier. A big question is what the size of the extension is, i.e. M_bar.

Warmup with processes we already know:

Now for the trickier example: Let's say that the stochastic process for X_t is a jump diffusion. For it we will compose L_s = a L_1 + sigma^2/2 L_2 + lambda L_J, where a < 0, L_1 is the backwards first-difference stencil, sigma^2 is the variance of the diffusion process, L_2 is the central-difference stencil for the second derivative, lambda is the Poisson arrival rate of the jump process, and L_J is the generator of the jump process.

So what is the size of M_bar? We don't know yet, but we do know it is at least M + 2 from the L_2 one.

As an example jump process, assume that upon an arrival there is a 50% chance you jump forward by 2 and a 50% chance you jump backwards by 3. Since we assumed that dx=1 in the grid, this means we can jump three grid points back or two forward. Let M=3; then I am pretty sure that the generator is of size M x (M+5), i.e. 3 x 8, and I think it is

L_J = [.5 0  0 -1  0 .5  0  0
       0 .5  0  0 -1  0 .5  0
       0  0 .5  0  0 -1  0 .5]
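A small sketch that builds that stencil programmatically (assuming M = 3, dx = 1, and the jumps of +2 and -3 grid points with probability 1/2 each):

M = 3                 # interior points
back, fwd = 3, 2      # jump sizes in grid points (dx = 1)
Mbar = M + back + fwd # extended grid size

L_J = zeros(M, Mbar)
for i in 1:M
    j = i + back              # column of interior point i on the extended grid
    L_J[i, j]        = -1.0   # leave the current state
    L_J[i, j - back] = 0.5    # jump back by 3 points
    L_J[i, j + fwd]  = 0.5    # jump forward by 2 points
end
L_J   # reproduces the 3 x 8 matrix above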

Finally, to compose the whole jump diffusion, now that we know the extended size is M + 5, we have to pad the left and right sides. So while the original first differences were something like

L_1 = [1 -1  0  0
       0  1 -1  0
       0  0  1 -1]

we can pad as

L_1^ex = [zeros(3,2) L_1 zeros(3,2)]

and similarly

L_2 = 0.5*[1 -2  1  0  0
           0  1 -2  1  0
           0  0  1 -2  1]

and

L_2^ex = [zeros(3,2) L_2 zeros(3,1)]

.... now I could have made a mistake somewhere in all of this, but I hope you get the point.

As for the boundary conditions, the ones I am interested in are linear and affine, but hopefully this gets my point across on why composing operators and boundary conditions is tricky. As for the number, we need to pin down 5 values (one for each extension node).

One example of a boundary condition is a reflecting barrier. This is u'(x_bar) = 0 for a diffusion process, but I am not 100% sure how to write it for a jump-diffusion. But if there was a reflecting barrier at the top then I think that this gives two equations and they are something like the following B:

[0 0 0 0 0 1  0 -1
 0 0 0 0 0 1 -1  0]

The other, trickier, boundary conditions are absorbing ones that are affine. The standard setup in finance and economics is that there is a stopping point at the boundary with a particular nonzero payoff. I think what you need to do there is define the payoffs crossing the boundary. So if our grid was x = [0 1 2] with dx = 1 and the 3 jumps back and 2 jumps forward, then we need to define a payoff at x = [-3 -2 -1] given the 3 jumps backwards. These would give our 3 final equations for the setup. I am trying to write that boundary condition formally, but it will take time.

CAVEAT: I am pretty sure I made errors in the actual matrices, but I think I have all of the sizes correct.

dlfivefifty commented 6 years ago

I'm confused by your explanation: my understanding is that one typically starts with a stochastic process, generates a continuous Fokker-Planck PDE, then discretises the PDE with finite differences. It sounds like you want to jump directly from the stochastic process to the finite differences, and skip the continuous PDE?

In terms of the continuous PDE, the description of the jump process doesn't sound right, as it's described in terms of jumps on the (artificial) grid.

I also don't understand "reflecting": this makes sense for wave equations and is equivalent to Neumann, but when you don't have waves, what does it mean to reflect?

jlperla commented 6 years ago

I'm confused by your explanation: my understanding is that one typically starts with a stochastic process, generates a continuous Fokker-Planck PDE, then discretises the PDE with finite differences. It sounds like you want to jump directly from the stochastic process to the finite differences, and skip the continuous PDE?

This is not the Fokker-Planck equation (which is the Kolmogorov forward equation) but rather a backward equation (coming from things like the Feynman–Kac formula if it is a diffusion). But you can write the Feynman–Kac formula for an arbitrary infinitesimal generator.

Sorry if my notation and steps were unclear, but I was trying to skip a few steps in going from the stochastic process to the finite differences for brevity. Think of this as starting with the infinitesimal generator of the process embedded in the Hamilton-Jacobi-Bellman equation. https://github.com/JuliaDiffEq/PDERoadmap/releases/download/v0.2.1/linear_operators_overview.pdf is more explicit on this for (non-jump) processes.

In terms of the continuous PDE, the description of the jump process doesn't sound right, as it's described in terms of jumps on the (artificial) grid.

I was just trying to rig up an example where it was easy to figure out the stencil, because the jump sizes lined up exactly with the grid, so I could write it out easily. More generally you would have to give the continuous jump distribution and discretize that. Sorry if I made things more confusing, but I think the stencil is something along those lines.

I also don't understand "reflecting":

Reflecting in the sense of: https://math.nyu.edu/~varadhan/Spring11/topics16.pdf But I still am having trouble finding how the finance guys write it down formally for jump-diffusions.

dlfivefifty commented 6 years ago

I was trying to skip a few steps

Skipping steps is never a good idea when talking to a mathematician 😉

Reflecting in the sense of: https://math.nyu.edu/~varadhan/Spring11/topics16.pdf

I see, it's "reflecting" in the sense of Brownian motion, not waves. Conveniently, both end up with Neumann.

If the jump stays away from boundary, then shouldn't it still be Neumann conditions?

It sounds like it would be convenient for people in finance to be able to specify things as an SDE and have the generation of the PDE happen automatically. Is this accurate?

jlperla commented 6 years ago

If the jump stays away from boundary, then shouldn't it still be Neumann conditions?

Probably. But the issue is that in general (and in most noncontrived examples) they wouldn't.

It sounds like it would be convenient for people in finance to be able to specify things as an SDE and have the generation of the PDE happen automatically. Is this accurate?

I think finance people might like that (as a higher level library on top of the DiffEqOperators) but I don't always understand their tastes and applications. As an economist I would rather start with the infinitesimal generators of the stochastic process, which means composing differential operators. The other thing is that 90% of my applications are solving a stationary ODE in the spatial dimension rather than a PDE.