AMReX-Combustion / PeleC

An AMR code for compressible reacting flow simulations
https://amrex-combustion.github.io/PeleC

Possibility of doing a U-turn case with EB #161

Closed: asmunder closed this issue 3 years ago

asmunder commented 4 years ago

I am looking at running a case in PeleC (or maybe PeleLM) that is essentially a U-turn, as illustrated below. The sketch is a 2D slice; this shape would be extruded in the direction normal to the screen.

[Figure: 2D sketch of the U-turn geometry, with a blue wall (intended inlet), a red wall (intended outlet), and a green wall (possible alternative outlet).]

Is this possible? I don't see any challenges in creating the geometry with EB per se, but I'm not sure whether it's possible to make the blue wall an inlet and the red wall an outlet. (A backup plan would be to use the green wall as the outlet, using a couple more EB blocks, but then we need to think about whether this changes the case too much for us.)

drummerdoc commented 4 years ago

You should be able to do this. The challenge with the geometry is that if the splitting plate is too thin, it will severely limit the number of coarsenings that can be generated, which would be a serious difficulty with PeleLM. We are working on that issue now, in fact. I believe the inflow and outflow should be doable without too much trouble.

emotheau commented 4 years ago

It should be easy in terms of EB modelling; however, you definitely need NSCBC to do inflow/outflow, especially on the same domain boundary.

asmunder commented 4 years ago

Thank you both for the prompt responses; good to hear that you think this should be feasible. I agree that we probably need NSCBC here with PeleC. We will probably also need AMR, in which case we may have to implement NSCBC in the new C++ code (ref. other issues).

Could you expand on the serious difficulty with PeleLM if there are not enough coarsenings, @drummerdoc? I take it you mean that if the splitting plate is very thin, the base resolution of the mesh has to be very fine for the EB to resolve it, since there is no refinement for the EB surfaces? Or did you mean something else?

@emotheau, with reference to the Fortran version of PeleC with NSCBC, am I correct in understanding that it should "just work" if we modify the pc_hypfill routine in bc_fill_nd.f90 so that the bcnormal routine gets called with different values of the sgn variable depending on the spatial coordinate?
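To make sure I'm describing the idea clearly, here is a rough C++-style sketch of what I have in mind (the real code is the Fortran pc_hypfill/bcnormal pair; the names, signatures, and split location below are simplified stand-ins, not the actual interfaces):

```cpp
#include <array>
#include <cstdio>

// Hypothetical stand-ins for the pc_hypfill/bcnormal machinery: sgn = +1 is
// treated as inflow here and sgn = -1 as outflow; the real convention may differ.
enum class BCType { Inflow, Outflow };

BCType selectBC (const std::array<double,3>& x, double x_split)
{
    // On the same domain face, points with x < x_split see the inflow (blue wall),
    // everything else sees the outflow (red wall).
    return (x[0] < x_split) ? BCType::Inflow : BCType::Outflow;
}

void fillBoundaryPoint (const std::array<double,3>& x, double x_split)
{
    const int sgn = (selectBC(x, x_split) == BCType::Inflow) ? +1 : -1;
    // ...here one would call a bcnormal-like routine with this sgn to fill the ghost state...
    std::printf("x = %.2f -> sgn = %+d\n", x[0], sgn);
}

int main ()
{
    // Two sample points on the same boundary face, split at x_split = 0.5 (made-up value).
    fillBoundaryPoint({0.25, 0.0, 0.0}, 0.5);  // inflow side
    fillBoundaryPoint({0.75, 0.0, 0.0}, 0.5);  // outflow side
    return 0;
}
```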

drummerdoc commented 4 years ago

We are just now getting into the issues of thin bodies (we knew long ago this would be a problem, but have moved incrementally toward it by working the easier issues first).

Generically, the problem is related to the dependence of the LM algorithm on linear solvers (for the nodal and MAC projections, and the scalar and tensor diffusion solves). Until now, we have relied exclusively on geometric multigrid (GMG) because of its compatibility with AMR meshing and its optimal scalability. GMG requires the generation of a hierarchy of coarsenings of the grid structure one is solving on, and for subcycled algorithms, such as the one used in PeleLM, this presents some difficulties: one generates the coarsenings for the BoxArray at a given level by coarsening each box. The hierarchy depth for single-level solves is then limited by the smallest box, and thus the "bottom solve" problem can end up being quite large, requiring a great deal of communication, etc. That issue is workable; we've been dealing with it for years. Larger "blocking factors" waste some grid, but generally provide enough MG levels for the problem to be solved reasonably well.
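For concreteness, this is the trade-off controlled by the usual AMReX inputs parameters, e.g. (illustrative values only):

```
amr.blocking_factor = 16   # every box dimension is a multiple of 16, so GMG can coarsen each box several times
amr.max_grid_size   = 64   # upper limit on box size
```

A larger blocking factor wastes some grid but keeps every box divisible by more factors of two, so GMG can build more coarse levels before hitting its bottom solve.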

With EB, a new set of issues arises. When there are "thin bodies", a volume- and connectivity-preserving coarsening eventually leads to the situation where there is more than one cut cell at a single IntVect location on the mesh - this is the "multi-cell issue". While there are data structures in AMReX to accommodate this situation, most of the algorithms don't yet properly account for it; the data is managed in a way that depends on it being logically rectangular. Thus, the GMG coarsening is additionally limited by when multi-cells first appear, which in turn is a function of how "thin" the boundary structures are. Even worse, the discretization for the nodal solve currently depends on using "covered" nodes adjacent to cut cells. As a result, there have to be two covered nodes between two cut cells across a thin structure. As you can see, this will rear its head in your case for sure.
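As a very rough rule of thumb (a sketch, not an exact criterion from the solver): if the plate thickness is t and the level-0 cell size is h_0, coarsening by factors of 2 gives cell size h_k = 2^k h_0, and it must stop roughly once h_k approaches half the plate thickness, since beyond that there is no room for distinct covered nodes between the two sides:

```latex
h_k = 2^k h_0, \qquad
h_{\max} \sim \frac{t}{2}, \qquad
k_{\max} \approx \log_2\!\left(\frac{t}{2\,h_0}\right).
```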

All this is to say that, as things are currently organized, we have to explore alternative linear solvers when the GMG "bottom solve" problem is too large due to thin bodies. And, unfortunately, it is not clear that anything compares well to GMG. We have coupled the solvers to call hypre's BoomerAMG (algebraic multigrid), but the performance seems to be... well, lacking. That said, we are just starting to look into this from multiple angles, including possibly rethinking the nodal stencils to reduce the requirements there, and other strategies to manage the solvers.

TMI?

emotheau commented 4 years ago

@asmunder If you look at Hydro.cpp, the actual NSCBC code is commented out, but I suppose that if you uncomment it, it will work. However, I am also discovering that the whole bcMask machinery to implement mixed boundary conditions has disappeared, especially in the Riemann solver. Are you comfortable with reactivating that? I guess it basically consists of copying and pasting from the old Fortran that was ignored in the new C++. If not, maybe you can ask the current PeleC developers? Otherwise I can take a look at it in my "free time", but not right now, unfortunately.

For PeleLM, what exactly is the ratio between the thickness of the plate and the whole domain? The problem is not really the physical thickness by itself, but whether it requires you to start from an already very fine level 0, which would be detrimental to the AMR paradigm. If your physical configuration gives you some freedom in this thickness, you can make the plate thicker so that the ratio of scales stays reasonable. I am more concerned about the inflow/outflow on the same domain boundary; I'm not sure how the linear solvers can handle internally mixed BCs.

drummerdoc commented 4 years ago

For PeleLM, the linear solvers would need to play tricks with the "transport coefficients" (i.e., for diffusion, the actual coefficients, and for the projection, the 1/rho in Div(1/rho Grad(phi))). The trick is that in places where the boundary should be a Neumann condition, this can be achieved either directly or by zeroing the transport coefficient there. So, a boundary that is mixed Dirichlet on some parts and Neumann on others can be formally set as a Dirichlet boundary, and where it is Neumann the transport coefficients are zeroed. I do not believe that this trick has been pulled into the projections (nodal or MAC), but it seems doable (unfortunately, the interface takes the rho values rather than 1/rho, so we'll have to do some surgery to get the zeros in there).

In @asmunder's case, the inflow part is Neumann and the outflow is Dirichlet for the projection. We would need to implement this hack before your red and blue boundary case would work with IAMR or PeleLM.
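Roughly the kind of thing I mean, as a sketch against the AMReX MultiFab interface (the function name, the choice of face, and the x_split criterion are all made up here for illustration; the real hook into the MAC/nodal projections would look different):

```cpp
#include <AMReX.H>
#include <AMReX_Geometry.H>
#include <AMReX_GpuLaunch.H>
#include <AMReX_MultiFab.H>

// Sketch: zero the face coefficient (the 1/rho-like beta) on the portion of the
// low-y domain face that should behave as Neumann, while the rest of that face
// keeps its formal Dirichlet treatment.
void zeroNeumannCoeffs (amrex::MultiFab& beta_y,        // y-face coefficients (assumed layout)
                        const amrex::Geometry& geom,
                        amrex::Real x_split)            // Neumann for x < x_split (made-up split)
{
    const auto dx     = geom.CellSizeArray();
    const auto problo = geom.ProbLoArray();
    const int  jlo    = geom.Domain().smallEnd(1);      // index of the low-y domain face

    for (amrex::MFIter mfi(beta_y); mfi.isValid(); ++mfi) {
        const amrex::Box& bx = mfi.validbox();
        auto b = beta_y.array(mfi);
        amrex::ParallelFor(bx, [=] AMREX_GPU_DEVICE (int i, int j, int k)
        {
            if (j == jlo) {                             // only faces on the low-y boundary
                const amrex::Real x = problo[0] + (i + 0.5) * dx[0];
                if (x < x_split) {
                    b(i,j,k) = 0.0;                     // zeroed coefficient -> effectively Neumann
                }
            }
        });
    }
}
```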

asmunder commented 4 years ago

Thank you both again for the very good info (not TMI, perfect level of detail for understanding this stuff).

I have to check the width of the plate with my colleagues next week; it definitely has a substantial thickness in reality, but it could be that we can play around with it a bit. Do I understand correctly that a plate width of 10% of the y-direction would mean that the biggest cell created by the coarsening would be 5% of the y-direction? Would this be prohibitively small for GMG efficiency?

And do I also understand correctly that for PeleC this setup would be easier to accomplish, both with respect to the thin structures and the boundary conditions, and that it would "just" be a question of re-implementing NSCBC and the bcMask machinery in the C++ version? We are borderline between low Mach and not in this case, so the timestep penalty of going with PeleC could be acceptable.

drummerdoc commented 4 years ago

This is the absolute minimum thickness that would work (see the figure below). Here, blue is the body, red are cut cells in the fluid, and yellow are covered nodes (in the body). So, in creating GMG levels, the solver would coarsen until it hit this situation and then be able to go no further. The code will assume all is well, but this might define a "bottom solve" problem that is large, thus requiring a huge number of relaxations to solve. If your "fine" grid in the hierarchy doesn't allow access to one distinct covered node from each side, or if it results in two or more cut cells at the same index, the solvers will fail (actually, the multi-cell situation will fail at the geometry-generation stage).

[Figure: sketch of the minimum usable plate thickness on the coarsest GMG level; blue is the body, red are cut cells in the fluid, yellow are covered nodes inside the body.]

rmrsk commented 4 years ago

Coming in from the left here: I talked with @asmunder about some of this earlier today and I just caught up with the full thread. If the two sides of the plate fall on either side of the symmetry plane in the figure, you won't generate multi-cells. The top half of the domain only sees one interface, and likewise for the bottom half, and since cells on opposite sides of the symmetry plane aren't merged during coarsening, all you have to do is keep the two sides of the plate on separate sides of the symmetry plane. The sharp end of the plate might generate multi-cells depending on where in the domain it ends and how you round off the edge, but to me it looks like rounding it off with a hemisphere (cylinder?) is sufficient to avoid them.

@drummerdoc - did I understand you correctly when you said that the deeper multigrid levels are created only through BoxArray coarsening? Is this specific to single-level solves? We've had success with composite solves in Chombo using a strategy where we first generate those levels through decomposition with the blocking factor, and then switch to box coarsening when further decomposition becomes impossible.

I have no idea what goes on in the nodal solver, but this does seem to be the major limitation in this case.

drummerdoc commented 4 years ago

So, recall that it is not only multi-cells that one needs to avoid. In PeleLM (and IAMR and incflo), where a nodal solver is required, the current method requires the use of one covered node on each side. So, yes, you can play games to get the minimum coverage, but you'll still run into coarsening issues for GMG. At the present time, unfortunately, there is potentially a lot to be gained by small shifts in the geometry around symmetry lines, etc.

When the integration algorithm subcycles, we need to do "level solves" for the MAC, nodal, and diffusion solves. Here, the coarse-fine boundary provides Dirichlet data for the solve, which is over a union of boxes. We do that by coarsening the BoxArray of the original problem spec, but there are special cases where you could easily do much better. If you are not subcycling (as with incflo and MAESTRO), all linear solves are over the entire hierarchy. In that case, there are far fewer limitations on the coarsening strategies, since eventually you can coarsen at least to the coarsest AMR level (and hopefully much further), and there are many tricks available, including merging grids, reducing the number of involved processors, etc. All of that is incorporated in the MLMG solver in AMReX. This distinction between level solves and full-hierarchy solves is a strong motivation to avoid subcycling, but it is just one of a number of pros and cons when looking for the best approach for a particular problem.

I should also say that there is some current effort to remove the covered-node requirement in the nodal solve, but that is ongoing. There is also some work on switching to algebraic multigrid (such as BoomerAMG in hypre) for large bottom-solve problems, but in the context of frequent regridding, AMG, which has substantial setup costs, hasn't yet been hugely promising for large problems. All of that is ongoing development.

asmunder commented 3 years ago

Just to update on this: we are running this case and it seems to work well with PeleC and NSCBC at inlet/outlet.

I have one final question, and that is how to enforce different BC values for temperature on different parts of the EB geometry - is this currently supported, or would we need to implement it? I have had a poke around the PeleC_init_eb.cpp routines where temperature is set, and it was not obvious how to proceed.

Please go ahead and close this issue, if you want.

drummerdoc commented 3 years ago

Thanks for the update. The ability to set spatially dependent boundary values is supported by the library, but we have not yet implemented a scheme to manage it in the Pele codes. I have some ideas involving building a local set of coordinates, but I haven't had a chance to try them out. In the meantime, there should be a way we could expose a generic function of space to the user for this. Let me know if you have ideas as well.

Thanks again! Marc

asmunder commented 3 years ago

Thanks for the quick response. Yes, I saw something in the AMReX docs about homogeneous vs. inhomogeneous Neumann. From my specific perspective, a function of space would be all that is needed. E.g., in PeleC_init_eb.cpp the temperature boundary condition is set like this (NB: I'm referring to the "old" PeleC version here; we haven't switched over yet):

      if (eb_isothermal && (diffuse_temp != 0 || diffuse_enth != 0)) {
          sv_eb_bcval[iLocal].setVal(eb_boundary_T, cQTEMP);
      }

Here it would be very nice for a user to just do

auto *myTemperatureFunction = +[](std::array<float,3> x) { if (x[1] < 2.0) { return 750.0; } else { return 1200.0; } };

and be able to pass the myTemperatureFunction function pointer as the first argument to sv_eb_bcval[iLocal].setVal(), and have that routine call the function internally - but I have no idea how easy or difficult that would be to implement; I'm just thinking out loud here, and I could easily be barking up the wrong tree.
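Something like the following is what I'm imagining, just to make the interface idea concrete (setEBBoundaryTemperature is a made-up name, not an existing PeleC or AMReX routine, and the actual plumbing into sv_eb_bcval would be up to you):

```cpp
#include <array>
#include <functional>

// Hypothetical: a boundary temperature given as a function of position,
// to be evaluated at each EB face centroid.
using SpatialBCFunction = std::function<double(const std::array<double,3>&)>;

// Made-up registration hook; internally it would do what setVal does today,
// except with a per-face value taken from tfunc instead of a single constant.
void setEBBoundaryTemperature (const SpatialBCFunction& tfunc);

// The example from above: 750 K below y = 2.0, 1200 K above.
static const SpatialBCFunction myTemperatureFunction =
    [](const std::array<double,3>& x) { return (x[1] < 2.0) ? 750.0 : 1200.0; };
```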

asmunder commented 3 years ago

Final update: the U-turn case works fine with PeleC. Closing this and opening a new issue for the more general topic of setting varying boundary conditions on the embedded boundaries.