Closed: marchdf closed this 3 months ago
I think I need more context to explain why it's needed in one and not the other. @cgilet can you help?
Pasting part of that chat here to avoid needing to repeat it:
In IAMR, since we aren't divergence-free on the composite grid, we have to do something to the ghost cells to enforce the divergence constraint. It's a rather "simple" technique of absorbing the error on the face that is furthest from the coarse-fine boundary, if that makes sense. There are some cases with more complex grid structures where it doesn't work, and we're potentially left not enforcing the divergence constraint in a few edge/corner ghost cells.
Feel free to add more details.
That makes sense. Want to remove it (or at least comment it out, with Candace's explanation pasted in for the curious)?
At the time I put that code in, it was needed in LMeX. Without it I could see all kinds of CF weirdness. I'll have to re-run some tests to see whether something changed, or look at the way it's done in incflo (it wasn't handled initially in incflo).
Yeah, it's not in incflo or amr-wind. It worries me that you thought it was necessary here. It means that maybe I need it in amr-wind? Let me know if you figure out why it is needed here.
Running the LMeX convected vortex case with AMR. The left plot uses the umac with the divergence corrected in the ghost cells (current LMeX code); the right plot is without the correction, returning right after the FillPatchTwoLevel.
From what I remember: the scalar advection uses the divergence of umac in the ghost cells at some point, not the cell-centered divU, and because FillPatchTwoLevel didn't conserve the divergence in the ghost cells, it introduces an error (in this case it screws up the mass balance, because divU should be zero but isn't).
I'll keep digging; maybe FillPatchTwoLevel can be made to handle one layer of ghost cells.
I agree with Candace's comment: the code to "fix" the divergence in the ghost cells works for simple cases (i.e., an easy BoxArray) but does not handle corner cases well.
Something isn't adding up here. The MAC projection just enforced the divergence constraint on the composite grid (to the tolerance), and then div_free_interp filled the ghost cells according to an interpolation that preserves that divergence. So either...
I will say that recently I noticed a subtle algorithmic difference between IAMR and incflo. What IAMR does is predict according to what we want, and then, if needed, compute the final convective term according to what we numerically have. Meaning, if we have `divU = S`, then `S` is passed into ComputeEdgeState, and if `div(Uq)` is the desired form then we're done. If `U dot grad q` is desired, then `divU = amrex::computeDivergence(Umac)` is used to formulate `U dot grad q = div(Uq) - q divU`.
What incflo does (and I'm assuming PeleLMeX also) is always use `computeDivergence(Umac)`. I have a use case where incflo fails, but IAMR is fine. There's a PR in the works...
@esclapez I believe 1 divU ghost cell is used within Godunov in 3D, but in 2D the ghost cells are only needed for convective differencing (which is the default for velocity in incflo, but obviously not used for density). So, your example here is 2D, right? I think the error must ultimately be coming from errors in the velocity advection and not the scalar advection here.
@esclapez I'm looking into whether div_free_interp is working as expected in incflo. Let me know if you make any progress with what's going on in PeleLMeX.
AMReX PR #4039 fixes the above example. @esclapez do you know of any other cases where the divfree interpolation wasn't working?
@esclapez can you share/or point to the case that you used to show the problem? I would like to check it with #403. Or maybe @cgilet you have the case already?
It's the CoVo problem, 2 levels, problem starts around step 455, IIRC
Ok, I am running `input.2d_CoVoAMR` with #403 and the latest amrex. And I see:
I also see that with the `development` branch of PeleLMeX, so I think everything is good.
Can you set the min/max so that I can see if that's real?
It is not "real". Just noise around a constant, which is what we want apparently:
density -- Min = 1.17985 (zone 431 in patch 2 at coord <0.496094, 0.480469>)
density -- Max = 1.17985 (zone 625 in patch 2 at coord <0.511719, 0.527344>)
A couple of extra significant figures would confirm the magnitude of the noise. From Lucas's plots, the variation is in the 6th decimal place for the uncorrected case, but not for the corrected case. But the pattern of noise in Marc's image above looks like the corrected version, so presumably the magnitude is also much smaller than 1e-6.
Here are the digits:
density -- Min = 1.179849656456941 (zone 431 in patch 2 at coord <0.49609375, 0.48046875>)
density -- Max = 1.179849656456945 (zone 625 in patch 2 at coord <0.51171875, 0.52734375>)
In chasing down something in amr-wind (a non-subcycling code), I was looking at all the stuff happening in create_constrained_umac_grown. Specifically this line and below: https://github.com/AMReX-Combustion/PeleLMeX/blob/development/Source/PeleLMeX_UMac.cpp#L270. This stuff is present in IAMR (subcycling) as well (in some similar form): https://github.com/AMReX-Fluids/IAMR/blame/development/Source/NavierStokesBase.cpp#L1159. It is not present in incflo (non subcycling).
In chatting with @cgilet, it looks like this stuff is needed for a subcycling code but not a non-subcycling code. Therefore this stuff is probably not needed for PeleLMeX? Is this a relic of transforming from LM (subcycling) to LMeX (non-subcycling)? Do @nickwimer, @drummerdoc, @esclapez have thoughts on this?