ranocha opened 2 years ago
Regarding the analysis callback: when we call, e.g., https://github.com/trixi-framework/Trixi.jl/blob/db25cc3eaf250d377ffde4f51b9374237f778da8/src/callbacks_step/analysis_dg2d.jl#L198, is the `du` here only the RHS from the hyperbolic solver, or would it already contain the sum of the hyperbolic and parabolic contributions?
`du` is set as https://github.com/trixi-framework/Trixi.jl/blob/db25cc3eaf250d377ffde4f51b9374237f778da8/src/callbacks_step/analysis.jl#L222, so it should be the combined RHS, I think.
The analysis callback reports the hyperbolic equation as the system being used, e.g.,

```
Simulation running 'CompressibleEulerEquations2D' with DGSEM(polydeg=3)
```

This should be specialized.
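One conceivable way to do this (a hedged sketch only; the helper name `system_name` and the field accesses are assumptions, not the actual Trixi.jl implementation) would be to dispatch on the semidiscretization type so that the parabolic equations are reported as well:

```julia
using Trixi

# Hypothetical helper, not actual Trixi.jl internals: report both equation
# systems for hyperbolic-parabolic semidiscretizations.
system_name(semi::SemidiscretizationHyperbolic) =
    string(nameof(typeof(semi.equations)))

system_name(semi::SemidiscretizationHyperbolicParabolic) =
    string(nameof(typeof(semi.equations)), " and ",
           nameof(typeof(semi.equations_parabolic)))
```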
Hi, what is the status on AMR & mortars? Is someone currently working on this, or is there still some theoretical work needed to get, e.g., the mortars for parabolic terms right? If it is just the implementation, I would offer to work on this.
As far as I know, nobody is working on that (except maybe someone around you, @jlchan?). The L2 mortars should work fine for parabolic terms - I think the same approach was/is used in FLUXO.
Thus, it would be great if you'd consider working on this - it would greatly increase the applicability of AMR for realistic flow setups 👍
I can only second what @sloede has said - it would be a great contribution!
@DanielDoehring @sloede parabolic mortars are on me and @apey236's list, but we were going to focus on MPI parallelization first. If you are interested in adding mortars for parabolic terms, I would be happy to help!
So I tried to come up with a copy-paste-based version originating from the mortars for purely hyperbolic equations (see https://github.com/trixi-framework/Trixi.jl/pull/1571).
As spelled out in the conversation of that draft PR (https://github.com/trixi-framework/Trixi.jl/pull/1571#discussion_r1265230939, https://github.com/trixi-framework/Trixi.jl/pull/1571#discussion_r1265233269), I could use some support with / a better understanding of the `calc_gradient` function - did you implement this originally, @jlchan?
Hi @DanielDoehring - yes, I implemented this originally in `DGMulti`, with @sloede and @ranocha's help for `TreeMesh`. Let me take a look at that PR and see if I can respond.
Regarding AMR: My very first steps indicate that the split form of the ODE might complicate things, so I would prefer to work with the standard form first. Ideally, we could get `OrdinaryDiffEq` to do the work, which should somehow be possible - otherwise, if not, we might need to combine `rhs!` and `rhs_parabolic!` into something like `rhs_hyperbolic_parabolic!` ourselves.
Can you explain briefly what makes it hard to use `SplitODEProblem` with AMR?
No, not really - for that I lack a proper understanding of what is going on, which would of course be easier to get if I could change only one thing at a time.
What I observe, however, is that after the grid gets changed, the `boundscheck` fails when called from

which is in turn called from

```julia
function (f::SplitFunction)(du, u, p, t)
    f.f1(f.cache, u, p, t)
    f.f2(du, u, p, t)
    du .+= f.cache
end
```

which resides in `SciMLBase.jl/src/scimlfunctions.jl`.

The error will be in the AMR part; it is just a bit hard to track down where it originates, since one has to navigate pretty much through the entire `SciMLBase`, `OrdinaryDiffEq`, and `DiffEqBase` jungle.
Could you please create an MWE of a `SplitODEProblem`, set up an `integrator`, and `resize!` it? I guess some caches are not resized correctly or something like that.
> Could you please create an MWE of a `SplitODEProblem`, set up an `integrator`, and `resize!` it? I guess some caches are not resized correctly or something like that.
Yes, that is probably what is going on - currently, only the `cache` of the hyperbolic part is passed on; that needs to be adapted to incorporate (at least) the `cache_parabolic` as well when we pass a `semi` of type `SemidiscretizationHyperbolicParabolic`.
Yes, that's the fix we need to make :+1:
> Could you please create an MWE of a `SplitODEProblem`, set up an `integrator`, and `resize!` it? I guess some caches are not resized correctly or something like that.
An MWE is not so easy to provide, but there is certainly something wrong with the `resize!` function from OrdinaryDiffEq for problems with a split RHS.
While debugging, I changed this to
```julia
if has_changed
    println("Target length u_ode: ", length(u_ode))
    println("f.cache pre-change: ", length(integrator.f.cache))
    resize!(integrator, length(u_ode))
    println("f.cache post-change: ", length(integrator.f.cache))
    u_modified!(integrator, true)
end
```
which outputs
```
Target length u_ode: 1024
f.cache pre-change: 4096
f.cache post-change: 4096
```
which is the reason why the `boundscheck`
https://github.com/trixi-framework/Trixi.jl/blob/375384659cb57a80e253c5e685db8ec298e30d8c/src/solvers/dg.jl#L538-L541
fails for `rhs_parabolic!`, which is passed as `f1`
https://github.com/trixi-framework/Trixi.jl/blob/67d137d6712f28bbe99ffef3d003afe96c47aade/src/semidiscretization/semidiscretization_hyperbolic_parabolic.jl#L286
and is thus called with mismatching dimensions of `du_ode` and the contents of the cache:

```julia
function (f::SplitFunction)(du, u, p, t)
    f.f1(f.cache, u, p, t) # Non-resized cache is put into f1!
    f.f2(du, u, p, t)
    du .+= f.cache
end
```
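One conceivable direct workaround (a hedged, untested sketch, not something that is implemented) would be to resize the `SplitFunction` cache manually right next to the integrator:

```julia
if has_changed
    resize!(integrator, length(u_ode))
    # Sketch of a workaround: also resize the SplitFunction's internal cache,
    # which resize!(integrator, ...) currently leaves untouched.
    resize!(integrator.f.cache, length(u_ode))
    u_modified!(integrator, true)
end
```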
For the moment, I avoid the `SplitFunction` entirely by basically replicating the erroneous code above in a bug-free manner via
```julia
function semidiscretize(semi::SemidiscretizationHyperbolicParabolic, tspan; split_form::Bool=true)
    u0_ode = compute_coefficients(first(tspan), semi)
    # TODO: MPI, do we want to synchronize loading and print debug statements, e.g. using
    #   mpi_isparallel() && MPI.Barrier(mpi_comm())
    # See https://github.com/trixi-framework/Trixi.jl/issues/328
    iip = true # is-inplace, i.e., we modify a vector when calling rhs_parabolic!, rhs!

    # Note that the IMEX time integration methods of OrdinaryDiffEq.jl treat the
    # first function implicitly and the second one explicitly. Thus, we pass the
    # stiffer parabolic function first.
    if split_form
        return SplitODEProblem{iip}(rhs_parabolic!, rhs!, u0_ode, tspan, semi)
    else
        specialize = SciMLBase.FullSpecialize # specialize on rhs! and parameters (semi)
        return ODEProblem{iip, specialize}(rhs_hyperbolic_parabolic!, u0_ode, tspan, semi)
    end
end

function rhs_hyperbolic_parabolic!(du_ode, u_ode, semi::SemidiscretizationHyperbolicParabolic, t)
    @trixi_timeit timer() "hyperbolic-parabolic rhs!" begin
        du_ode_hyp = similar(du_ode) # TODO: Avoid allocations, make member variable of something?
        rhs!(du_ode_hyp, u_ode, semi, t)
        rhs_parabolic!(du_ode, u_ode, semi, t)
        du_ode .+= du_ode_hyp
    end
end
```
which actually works (currently 2D only).
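For illustration, an elixir would then opt out of the split form via the keyword introduced above (a sketch; `semi` and `callbacks` are assumed to be set up as in a typical elixir, with `OrdinaryDiffEq` and `Trixi` loaded):

```julia
# Sketch: build the ODE problem without the SplitFunction machinery
ode = semidiscretize(semi, (0.0, 1.0); split_form = false)
sol = solve(ode, SSPRK43(); callback = callbacks, save_everystep = false)
```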
I will try to report this bug in the OrdinaryDiffEq package, but it is not so easy to reproduce since I am not quite sure how to get my hands on an `ODEIntegrator` object (i.e., not a solver/algorithm).
> how to get my hands on an `ODEIntegrator` object (i.e., not a solver/algorithm).
Use `integrator = init(ode, alg; kwargs...)` instead of `sol = solve(ode, alg; kwargs...)`.
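For reference, a minimal sketch along those lines (the toy right-hand sides are made up here; only OrdinaryDiffEq is assumed) could look like:

```julia
using OrdinaryDiffEq

# Toy split RHS: f1 plays the role of the "parabolic" part, f2 the "hyperbolic" one
f1!(du, u, p, t) = (du .= -u)
f2!(du, u, p, t) = (du .= 2 .* u)

u0 = ones(4)
ode = SplitODEProblem{true}(f1!, f2!, u0, (0.0, 1.0))

integrator = init(ode, SSPRK33(); dt = 0.01, save_everystep = false)
resize!(integrator, 8)  # mimic AMR changing the number of unknowns

@show length(integrator.u)        # 8 after resizing
@show length(integrator.f.cache)  # if the bug is present, this is still 4
```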
With https://github.com/trixi-framework/Trixi.jl/pull/1629 merged, we can tick off mortars & AMR for `TreeMesh`es.
With https://github.com/trixi-framework/Trixi.jl/pull/1765 merged, AMR for 3D parabolic terms on p4est meshes is now also available.
I would like to pick up the viscous CFL business. I would be grateful if someone could point me to a reference for how the viscous time step restriction should be computed.
The CFL conditions (both advective and viscous) are similar to the FLEXI implementation. Have a look at Section 3.5 of the FLEXI code paper.
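Roughly, the structure of these time step restrictions (a hedged sketch of the idea only; see the paper for the exact formulas) is

$$
\Delta t_{\mathrm{adv}} \sim \mathrm{CFL}\,\frac{\Delta x}{(2N+1)\,\lambda^{\mathrm{adv}}_{\max}},
\qquad
\Delta t_{\mathrm{visc}} \sim \mathrm{DFL}\,\frac{(\Delta x)^2}{(2N+1)^2\,\lambda^{\mathrm{visc}}_{\max}},
\qquad
\Delta t = \min(\Delta t_{\mathrm{adv}},\, \Delta t_{\mathrm{visc}}),
$$

where $N$ is the polynomial degree, $\lambda^{\mathrm{adv}}_{\max}$ the largest advective wave speed, and $\lambda^{\mathrm{visc}}_{\max}$ the largest (scaled) diffusion coefficient.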
This is basically taken from #1136. Currently, our idea is to merge the `DGMulti` setup from that PR into `dev`, draft a corresponding `TreeMesh` & `DGSEM` version in another PR, and see how it looks for Navier-Stokes. When that's okay, we can go ahead and merge `dev` into `main` to finally get support for parabolic terms in Trixi.jl.

- `TreeMesh` version with `DGSEM`
- `TreeMesh`
- `P4estMesh`
- `P4estMesh`
- `StructuredMesh`
- `jump(u)` orientation (i.e., compute `x` and `y` components separately)?
- `Gradient`/`Divergence` boundary conditions in docs
- `grad_u` (current) vs. `gradients` or `gradients_u`
- `no_boundary_condition` (current) vs. `boundary_condition_do_nothing` or something else (what?). Answer: `boundary_condition_do_nothing`
- `ViscousFormulation***`, but this gets pretty long
- Rename `elixir_navier_stokes_xxx.jl` to `elixir_navierstokes_xxx.jl` for consistency (e.g., with `elixir_shallowwater_xxx.jl`)
- `StepsizeCallback` with viscous CFL condition