`Dx*F` is x-derivatives, `F*Dy` is y-derivatives. Do the calculation by hand for the discretized Laplacian and this becomes apparent (or you might have it the other way around, depending on whether x runs along rows or columns).
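Here's a minimal sketch of that by-hand calculation with plain dense matrices (my own illustration, not the package's operators), assuming `F[i,j] = f(x[i], y[j])`:

```julia
# Plain centered second-difference matrices; boundary rows ignored for brevity.
n, m = 5, 7
x = linspace(0, 1, n); y = linspace(0, 1, m)
dx = x[2] - x[1]; dy = y[2] - y[1]
F = [xi^2 + yj^2 for xi in x, yj in y]

# (1, -2, 1)/h^2 stencil as a dense matrix
D2(k, h) = (diagm(ones(k-1), -1) - 2*eye(k) + diagm(ones(k-1), 1)) / h^2

Fxx = D2(n, dx) * F    # left-multiply: differentiates down each column (x)
Fyy = F * D2(m, dy)'   # right-multiply: differentiates across each row (y)
lap = Fxx + Fyy        # ≈ the discretized Laplacian in the interior
```

Since `D2` is symmetric in the interior, the transpose is cosmetic here, but it matters for operators that aren't symmetric.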
So you mean I can use `A = LinearOperator(1,2,...)`, and then `A*F` is x-derivatives and `F*A` is y-derivatives? (And similarly for higher order...) That's neat, thank you.
Yes. And that's if x goes down columns and y goes across rows. If that's swapped, just swap to `F*A` and `A*F`.
Hah, this was actually obvious the second one thinks in terms of linear algebra...
I've tried this now, and have some problems:

- `A` is not fully symmetric, so `F*A` won't work without a proper transposition of `A`.
- `A*F` and `F*A` call operations from LinearMaps, as opposed to what happens with `A*x` for `x` a vector:

  ```julia
  julia> @which(F*A)
  *(A1::AbstractArray{T,2} where T, A2::LinearMaps.AbstractLinearMap) in LinearMaps at /home/asbjorn/.julia/v0.6/LinearMaps/src/wrappedmap.jl:41
  ```
- `F*A` (with the operator below called `B`) does not work:

  ```julia
  using PDEOperators
  xarr = linspace(0, 1, 51)
  yarr = linspace(0, 1, 101)
  dy = yarr[2] - yarr[1]
  F = [x^2 + y for x = xarr, y = yarr]
  B = LinearOperator{Float64}(2, 2, dy, length(yarr), :None, :None)
  F*B   # this is the multiplication that fails
  ```
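If only `A*x`/`A*F` products are reliable, one workaround (my suggestion, not something the package documents) is to move the y index into rows by transposing:

```julia
Fyy = (B * F')'   # second y-derivative: B acts down the rows of F', i.e. along y
```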
Issues like these come out of the fact that these are discretization operators meant for efficient PDE solving, not necessarily tailored to general numerical differentiation. The math is the same, but the API preferences diverge to some extent. That's why I've always postulated that when all is said and done, we probably want a separate package for numerical differentiation that shares a common core with the calculations currently done here.
The one-sided boundary issue is simple: we should ultimately let users choose whether they prefer to sacrifice symmetry or approximation order, since you can't have both. The rest are things we should discuss in terms of how we'd like the final APIs (probably plural) to look, and then structure the code accordingly.
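To make that trade-off concrete, here's an illustration with standard textbook stencils (my own, not from this thread): upgrading the boundary row of a second-derivative matrix to a second-order one-sided stencil destroys symmetry.

```julia
h = 0.1; n = 5
# Symmetric interior: (1, -2, 1)/h^2 centered stencils
D = (diagm(ones(n-1), -1) - 2*eye(n) + diagm(ones(n-1), 1)) / h^2
issymmetric(D)                        # true
# Second-order one-sided boundary row: (2, -5, 4, -1)/h^2
D[1, 1:4] = [2.0, -5.0, 4.0, -1.0] / h^2
issymmetric(D)                        # false: order at the edge costs symmetry
```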
@dextorious explained it nicely. If you check the operator that comes out of the Laplace (2,2) discretization, it won't have all of the properties you want with those BCs, by design. The discretization error is O(dx^2) though, so it does converge. But these are not "structure-preserving" operators: they are efficient discretizations, which is very different. If you're looking for structure-preserving techniques, that's on the wishlist but not coming soon.
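A quick way to sanity-check the O(dx^2) claim (a hedged sketch with a plain dense matrix, not the package's operator): halving dx should roughly quarter the interior error.

```julia
function err(n)
    x = linspace(0, 1, n); h = x[2] - x[1]
    D = (diagm(ones(n-1), -1) - 2*eye(n) + diagm(ones(n-1), 1)) / h^2
    maximum(abs.((D * sin.(x) + sin.(x))[2:end-1]))   # exact f'' = -sin(x)
end
err(51) / err(101)   # ≈ 4, consistent with O(dx^2)
```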
> The multiplication operator for `A*F` and `F*A` calls operations from LinearMaps as opposed to what happens with `A*x` for `x` a vector.

That's a mistake that should be corrected. @shivin9, we need the other operator defined. Since the matrix is symmetric, this should be pretty easy.
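A hypothetical sketch of what that missing method could look like (assuming the symmetric case and reusing the existing left-multiplication; this is not the actual fix that landed):

```julia
import Base: *
# Right-multiplication by a symmetric operator, reusing the existing A*F:
*(F::AbstractMatrix, A::LinearOperator) = (A * F')'
```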
@ChrisRackauckas Yeah, I saw that. If `F` is multidimensional though, then should we recursively apply the operator to every lower-dimensional structure until we reach its rows? I will correct it for 2D domains anyway.
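For the N-dimensional case, one possibility (my sketch, not the package's plan) is to apply the 1-D operator along a chosen dimension with `mapslices` rather than recursing explicitly:

```julia
# Apply operator A along dimension `dim` of an N-D array F (Julia 0.6 syntax).
deriv_along(A, F, dim) = mapslices(v -> A * v, F, dim)
```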
What is the easiest way to do partial derivatives with these operators? Say I have an array `F = [f(x,y) for x in xarr, y in yarr]` and would like to approximate \partial F/\partial x with some linear operator, i.e. `F_x = Dx*F`. Should I do `map` or `broadcast` along rows/columns (depending on \partial_x or \partial_y)?
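For comparison with the matrix-product answer above, the column-wise route the question asks about could look like this (hedged sketch; `Dx` is assumed to be a `LinearOperator` over `length(xarr)` points):

```julia
# Equivalent to Dx*F, just done one column at a time:
F_x = hcat([Dx * F[:, j] for j in 1:size(F, 2)]...)
```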