AMReX-Combustion / PeleLMeX

An adaptive mesh hydrodynamics simulation code for low Mach number reacting flows without level sub-cycling.
https://amrex-combustion.github.io/PeleLMeX/
BSD 3-Clause "New" or "Revised" License

Unnecessary code after mac FillPatchTwoLevels? #386

Open marchdf opened 3 weeks ago

marchdf commented 3 weeks ago

In chasing down something in amr-wind (a non-subcycling code), I was looking at all the stuff happening in create_constrained_umac_grown. Specifically this line and below: https://github.com/AMReX-Combustion/PeleLMeX/blob/development/Source/PeleLMeX_UMac.cpp#L270. This stuff is present in IAMR (subcycling) as well, in some similar form: https://github.com/AMReX-Fluids/IAMR/blame/development/Source/NavierStokesBase.cpp#L1159. It is not present in incflo (non-subcycling).

In chatting with @cgilet, it looks like this stuff is needed for a subcycling code but not a non-subcycling code. Therefore this stuff is probably not needed for PeleLMeX? Is this a relic of transforming from LM (subcycling) to LMeX (non-subcycling)? Do @nickwimer, @drummerdoc, @esclapez have thoughts on this?

drummerdoc commented 3 weeks ago

I think I need more context to explain why it's needed in one and not the other. @cgilet can you help?

marchdf commented 3 weeks ago

This is part of the chat to avoid needing to repeat:

In IAMR, since we aren't divergence-free on the composite grid, we have to do something to the ghost cells to enforce the divergence constraint. It's a rather "simple" technique of absorbing the error on the face that is furthest from the coarse-fine boundary, if that makes sense. There are some cases with more complex grid structures where it doesn't work, and we're potentially left not enforcing the divergence constraint in a few edge/corner ghost cells.

Feel free to add more details.
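For the curious, the "absorb the error on the far face" technique described above can be sketched on a single 2D MAC ghost cell. This is a hypothetical toy example, not the IAMR/PeleLMeX implementation; all names (`ghost_cell_div`, `absorb_on_far_face`, the face layout) are made up for illustration. Picture one ghost cell just outside a coarse-fine boundary on the low-x side: `u_hi` is the x-face shared with the valid region, `u_lo` is the x-face furthest from the boundary, and `v_lo`/`v_hi` are the transverse y-faces filled by interpolation.

```cpp
// Toy sketch (NOT actual IAMR/PeleLMeX code) of fixing the divergence
// in one ghost cell by adjusting only the face furthest from the
// coarse-fine boundary.
#include <cassert>
#include <cmath>

// Discrete divergence of a single MAC cell.
double ghost_cell_div(double u_lo, double u_hi, double v_lo, double v_hi,
                      double dx, double dy) {
  return (u_hi - u_lo) / dx + (v_hi - v_lo) / dy;
}

// Absorb the divergence error into the far face: solve
//   (u_hi - u_lo)/dx + (v_hi - v_lo)/dy = 0
// for u_lo, leaving the shared face and the transverse faces untouched.
double absorb_on_far_face(double u_hi, double v_lo, double v_hi,
                          double dx, double dy) {
  return u_hi + dx * (v_hi - v_lo) / dy;
}
```

With dx = dy = 0.1, u_hi = 1.0, v_lo = 0.3, v_hi = 0.5, the corrected far face is u_lo = 1.2 and the ghost cell's divergence drops to zero. The per-cell nature of the fix is also why it breaks down for the complex grid structures mentioned above: at edges/corners the "far face" of one ghost cell can be shared with another ghost cell.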

drummerdoc commented 3 weeks ago

That makes sense. Want to remove it (or at least comment out with Candace's explanation pasted in for the curious?)

esclapez commented 3 weeks ago

At the time I put that code in, it was needed in LMeX. Without it I could see all kinds of CF weirdness. I'll have to re-run some tests to see if things have changed, or look at the way it's done in incflo (it wasn't handled initially in incflo).

marchdf commented 3 weeks ago

Yeah, it's not in incflo or amr-wind. It worries me that you thought it was necessary here. It means that maybe I need it in amr-wind? Let me know if you figure out why it is needed here.

esclapez commented 3 weeks ago

Running the LMeX convected vortex case with AMR. The left plot uses the umac with the divergence corrected in the ghost cells (current LMeX code); the right plot is without the correction, returning right after the FillPatchTwoLevels call.

[Screenshot 2024-06-08: side-by-side comparison of the corrected (left) and uncorrected (right) runs]

From what I remember: the scalar advection at some point uses the divergence of umac in the ghost cells, not the cell-centered divU, and because FillPatchTwoLevels doesn't conserve the divergence in the ghost cells, it introduces an error (in this case it screws up the mass balance, because divU should be zero but isn't).
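The mass-balance point can be seen on a single cell. This is a hypothetical sketch (not PeleLMeX code; `mac_div` and `conservative_update` are made-up names): the conservative update q_new = q − dt·div(q_face·umac) reduces, for a constant scalar, to q_new = q − dt·q·div(umac), so any spurious divergence left in a ghost cell by the fill shows up directly as a mass error even though nothing should be advected.

```cpp
// Hypothetical single-cell sketch (not PeleLMeX code) of how a nonzero
// div(umac) in a ghost cell perturbs a conservative scalar update.
#include <cassert>
#include <cmath>

// Discrete MAC divergence of one cell.
double mac_div(double u_lo, double u_hi, double v_lo, double v_hi,
               double dx, double dy) {
  return (u_hi - u_lo) / dx + (v_hi - v_lo) / dy;
}

// Conservative update: q_new = q - dt * div(q_face * umac),
// with q_* the face-centered edge states.
double conservative_update(double q, double q_xlo, double q_xhi,
                           double q_ylo, double q_yhi,
                           double u_lo, double u_hi,
                           double v_lo, double v_hi,
                           double dx, double dy, double dt) {
  const double div_uq = (u_hi * q_xhi - u_lo * q_xlo) / dx
                      + (v_hi * q_yhi - v_lo * q_ylo) / dy;
  return q - dt * div_uq;
}
```

With divergence-free faces (u_lo = u_hi = 1.0, v_lo = v_hi = 0.5), a constant q = 2.0 is unchanged. Bump u_hi to 1.1 (div(umac) = 1) and the same constant field loses dt·q·divU of mass in one step.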

I'll keep digging; maybe FillPatchTwoLevels can be made to handle one layer of ghost cells.

I agree with Candace's comment: the code to "fix" the divergence in the ghost cells works for simple cases (i.e. an easy BoxArray) but does not handle corner cases well.

cgilet commented 3 weeks ago

Something isn't adding up here. The MAC projection just enforced the divergence constraint on the composite grid (to the tolerance), and then div_free_interp filled the ghost cells according to an interpolation that preserves that divergence. So either

  1. The MAC projection didn't do its job
  2. div_free_interp didn't preserve the existing divergence on the composite grid
  3. The advection scheme is really sensitive to small errors

I will say that recently I noticed a subtle algorithmic difference between IAMR and incflo. What IAMR does is predict according to what we want, and then, if needed, compute the final convective term according to what we numerically have. Meaning, if we have divU=S, then S is passed into ComputeEdgeState and if div(Uq) is the desired form then it's done. If U dot grad q is desired, then divU = amrex::computeDivergence(Umac) is used to formulate U dot grad q = div(Uq) - q divU.
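That identity can be sketched discretely on one cell. This is a hypothetical toy example (not the IAMR implementation; `convective_term` and the argument layout are made up), where div(U) is computed from the same Umac faces as div(Uq), standing in for amrex::computeDivergence in the real code:

```cpp
// Hypothetical single-cell sketch (not IAMR code) of the identity
//   U . grad q = div(U q) - q * div(U),
// with both divergences built from the same MAC faces.
#include <cassert>
#include <cmath>

double convective_term(double q_cc, double q_xlo, double q_xhi,
                       double q_ylo, double q_yhi,
                       double u_lo, double u_hi,
                       double v_lo, double v_hi,
                       double dx, double dy) {
  const double div_uq = (u_hi * q_xhi - u_lo * q_xlo) / dx
                      + (v_hi * q_yhi - v_lo * q_ylo) / dy;
  const double div_u  = (u_hi - u_lo) / dx + (v_hi - v_lo) / dy;
  return div_uq - q_cc * div_u; // U . grad q
}
```

The payoff of the subtraction: for a constant q the convective term vanishes identically, even when the discrete div(U) is nonzero, whereas the raw div(Uq) would not. Using the numerically computed divergence (rather than the analytic S) is what makes the two terms cancel exactly.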

What incflo does (and I'm assuming PeleLMeX also) is always use computeDivergence(Umac). I have a use case where incflo fails, but IAMR is fine. There's a PR in the works...

@esclapez I believe 1 divU ghost cell is used within Godunov in 3D, but for 2D the ghost cells are only needed for convective differencing (which is the default for velocity in incflo, but obviously not used for density). So, your example here is 2D, right? I think the error must ultimately be coming from errors in velocity advection and not the scalar advection here.

cgilet commented 2 weeks ago

@esclapez I'm looking into whether div_free_interp is working as expected in incflo. Let me know if you make any progress with what's going on in PeleLMeX.