NaluCFD / Nalu

Nalu: a generalized, unstructured, massively parallel low-Mach flow code designed to support a variety of open applications of interest, built on the Sierra Toolkit and the Trilinos Tpetra solver stack. The open-source BSD 3-clause license model has been chosen for the code base. See LICENSE for more information.
https://github.com/NaluCFD/Nalu

Clarity in matrix reuse/recompute options #120

Closed spdomin closed 7 years ago

spdomin commented 7 years ago

Recent questions regarding the reuse and recompute options for matrix management in Nalu have been raised.

In general, these options are germane to the continuity system. The low-Mach pressure Poisson equation (PPE) solved is as follows:

D tau G delta_p = -res

tau is a time scale, and D and G are the divergence and gradient operators; res above might include a mass matrix, M(rho). In the case of a time-step scaling, tau = dt. Therefore, we can factor this out of the system to obtain:

DG delta_p = -res/dt

In an approximate projection step, DG != L. However, we make this approximation, which is the reason we include stabilization:

L delta_p = -res/dt
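Collecting the steps above, the same factoring can be written compactly (a LaTeX restatement of the plain-text equations, nothing new added):

```latex
D \,\tau\, G \,\delta p = -\mathrm{res},
\qquad \tau = \Delta t \ \text{(a scalar)}
\;\Longrightarrow\;
D G \,\delta p = -\frac{\mathrm{res}}{\Delta t},
\qquad
D G \approx L
\;\Longrightarrow\;
L \,\delta p = -\frac{\mathrm{res}}{\Delta t}.
```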

For a mesh that has no topological changes (these occur due to adaptivity, sliding mesh, or overset), L is constant over the full simulation. In such cases, we want the recompute and reuse preconditioner options to be false, so that L prevails over the full simulation. Although we reassemble the system, we want to limit the AMG setup cost to a single step over the full simulation.

Note that for the original system, D tau G delta_p = -res, the connectivity can remain the same while the Jacobian entries change. In such cases, reuse should be true. Although tau is taken to be dt, the residual might contain a needed Jacobian entry due to M(rho), specifically in acoustically compressible flows in which dp/d(rho) != 0.
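The scenario-to-flags mapping described above can be sketched as a small decision function. This is purely illustrative: the type and function names below are hypothetical and are not Nalu's API; only the semantics of the two flags follow the discussion in this issue.

```cpp
// Hypothetical sketch of the reuse/recompute selection logic discussed above.
// None of these names exist in Nalu; they only encode the rules of thumb.
struct PrecondOptions {
  bool recomputePreconditioner;  // hard reset whenever the MueLu setup is invoked
  bool reusePreconditioner;      // keep AMG hierarchy while only LHS entries change
};

enum class MatrixScenario {
  FixedTopologyConstantL,        // L constant over the whole run
  FixedTopologyChangingEntries,  // same graph; Jacobian entries change, e.g. M(rho)
  TopologyChangesPerTimeStep,    // sliding mesh/overset; reinitialize_linear_system() resets
  TopologyChangesInPicardLoop    // connectivity changes within the nonlinear loop
};

PrecondOptions selectOptions(MatrixScenario s) {
  switch (s) {
    case MatrixScenario::FixedTopologyConstantL:
      return {false, false};  // pay the AMG setup cost exactly once
    case MatrixScenario::FixedTopologyChangingEntries:
      return {false, true};   // keep the graph-based setup, refresh the values
    case MatrixScenario::TopologyChangesPerTimeStep:
      return {false, false};  // reinitialize_linear_system() performs the reset
    case MatrixScenario::TopologyChangesInPicardLoop:
      return {true, false};   // force a full preconditioner rebuild
  }
  return {true, false};       // conservative fallback
}
```

The non-intuitive sliding-mesh case (both flags false) works only because the reset is delegated to the reinitialization call at the top of the time step.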

For meshes whose topology changes in Nalu due to sliding or overset, L changes in time but is fixed over the nonlinear loop. In this case, the EquationSystem calls reinitialize_linear_system() at the top of the time step, which deletes everything and sets the AMG preconditioner to NULL. Here we want reuse and recompute set to false; this guarantees that the system is reinitialized once at the top of the Picard loop.

Finally, for algorithms that allow connectivity changes within the Picard loop, e.g., if we allowed adaptivity within the iteration, we would want recompute set to true.
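To make the control flow concrete, here is a sketch of where reinitialize_linear_system() sits relative to the Picard loop. The EquationSystem stand-in below is a toy: only the reinitialize_linear_system() name mirrors the actual Nalu call mentioned above; everything else is invented for illustration.

```cpp
// Toy stand-in for Nalu's EquationSystem; only reinitialize_linear_system()
// corresponds to the real call discussed in this issue.
struct EquationSystem {
  bool precondIsNull = false;
  int numReinit = 0;
  bool topology_changed() const { return true; }  // e.g., sliding mesh/overset
  void reinitialize_linear_system() { precondIsNull = true; ++numReinit; }
  void assemble() {}                              // LHS/RHS entries refreshed
  void solve() { precondIsNull = false; }         // AMG setup occurs here if precond is NULL
};

void run_time_steps(EquationSystem& eqSys, int nSteps, int nPicard) {
  for (int step = 0; step < nSteps; ++step) {
    if (eqSys.topology_changed())          // topology change at this step?
      eqSys.reinitialize_linear_system();  // delete the system; AMG precond -> NULL
    for (int k = 0; k < nPicard; ++k) {    // Picard (nonlinear) loop
      eqSys.assemble();                    // reassembled every iteration
      eqSys.solve();                       // AMG setup happens once, after the reinit
    }
  }
}
```

With recompute and reuse both false, the AMG setup is triggered only by the NULL preconditioner after each reinitialization, i.e., once per time step rather than once per Picard iteration.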

I think we need some clarity in defining these options.

recompute is a hard reset any time setMuelu is called. reuse applies only when the LHS entries are changing under the context of fixed topology.

For sliding-mesh simulations, we need recompute = reuse = false. This feels non-intuitive and is only functional because the EqSys calls reinitialize_linear_system().

I might change recompute to "forced_reset_any_time_setMuelu_is_called" or something like that :)

Of course, in the future we may want to allow lagging of the LHS connectivity/values with an updated RHS.

spdomin commented 7 years ago

With the above theoretical description, confirmation from @sthomas61 that reuse and recompute are working as expected for mesh motion (sliding mesh/overset), and elemClosedDomain working as expected with the "reuse" option, I am closing this ticket.