su2code / SU2

SU2: An Open-Source Suite for Multiphysics Simulation and Design
https://su2code.github.io

Possible correctness issue in how turbulence equations are discretized #721

Closed pcarruscag closed 5 years ago

pcarruscag commented 5 years ago

While working on #718, it was brought to my attention by @economon that CTurbSolver performs its own interpolation/reconstruction of the bulk flow variables when computing the convective fluxes. This is mentioned in #403 and #422, but my concern is different.

This reconstruction is not consistent with the one used by the flow solver (except in some very special circumstances), which means the convective fluxes used for the turbulence variables do not respect the discretized continuity equation.
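To make the concern concrete, here is a schematic of the semi-discrete convective term for a turbulence variable $\phi$ at node $i$ (my notation, just for illustration):

$$
R_i^{\mathrm{conv}} = \sum_{j \in \mathcal{N}(i)} \dot m_{ij}\, \phi_{ij},
\qquad
\dot m_{ij} \approx \left(\rho\, \vec u \cdot \vec n \,\Delta S\right)_{ij}.
$$

If $\dot m_{ij}$ is the same edge mass flux that enters the discrete continuity equation, then at convergence $\sum_{j} \dot m_{ij} \to 0$ and a uniform $\phi$ field produces zero convective residual, as it should. If instead $\dot m_{ij}$ is re-reconstructed from primitives inside CTurbSolver, that cancellation is not guaranteed.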

Now, have I missed something, and is this being compensated by some source term? If not, I think the solution is relatively straightforward: one simply stores the convective fluxes computed in CEulerSolver::Centered/Upwind_Residual for later use in CTurbSolver.
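A minimal sketch of what I have in mind (the EdgeMassFlux container and the function names below are made up for illustration, not the actual SU2 interfaces): the flow solver saves the mass flux it actually used for each edge, and the turbulence solver reads it back instead of re-reconstructing it from primitives.

```cpp
#include <vector>
#include <cstddef>

// Hypothetical shared storage for the edge-based mass fluxes of the dual grid.
struct EdgeMassFlux {
  std::vector<double> mdot;  // one mass flux per edge
  explicit EdgeMassFlux(std::size_t nEdges) : mdot(nEdges, 0.0) {}
};

// In the flow solver's convective residual loop (centered or upwind scheme),
// the mass flux that was actually used for the discretization is saved.
void FlowConvectiveResidual(EdgeMassFlux& flux /*, grid, flow solution, ... */) {
  for (std::size_t iEdge = 0; iEdge < flux.mdot.size(); ++iEdge) {
    // ... compute the numerical flux for this edge (Roe, central, ...) ...
    double mdot_ij = 0.0;        // placeholder: rho * (u . n) * area from the scheme
    flux.mdot[iEdge] = mdot_ij;  // store it for the turbulence equations
    // ... add the flux contributions to the flow residual ...
  }
}

// In the turbulence solver, reuse the stored flux instead of re-reconstructing
// density and velocity at the edge.
void TurbConvectiveResidual(const EdgeMassFlux& flux /*, grid, turb solution, ... */) {
  for (std::size_t iEdge = 0; iEdge < flux.mdot.size(); ++iEdge) {
    double mdot_ij = flux.mdot[iEdge];
    double phi_ij = 0.0;          // placeholder: upwinded turbulence variable at the edge
    double conv = mdot_ij * phi_ij;
    // ... add 'conv' to the residuals of the two nodes sharing this edge ...
    (void)conv;
  }
}
```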

I would be happy to implement this, but do let me know if I got something wrong, folks.

clarkpede commented 5 years ago

As far as I know, there is no compensation in the source terms. The source terms are implemented pretty closely to how they appear in published (analytical) equations.
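For reference (taking the standard Spalart-Allmaras model as an example, and leaving out trip terms), the per-node source contribution is essentially the published production-minus-destruction balance

$$
S_{\tilde\nu} = c_{b1}\,\tilde S\,\tilde\nu \;-\; c_{w1}\, f_w \left(\frac{\tilde\nu}{d}\right)^{2},
$$

evaluated from the local flow state and the wall distance $d$; there is nothing in it that would compensate for an inconsistent convective mass flux.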

pcarruscag commented 5 years ago

So I tested this on 4 mesh levels for a NACA0006 at 2.0 degrees AoA, at low (0.6) and high-ish (0.8) Mach number (Roe scheme). These are the results for low Mach:

[image: results on the 4 mesh levels, low Mach]

There are very small differences between recomputing a mass flux from the primitives ("Reconstructed") and storing the flux computed during the discretization of convection ("Consistent"). However, the convergence rate for the latter approach is much worse:

[image: convergence history comparison]

This makes sense, because we go from a Gauss-Seidel coupling of flow and turbulence to a half GS, half Jacobi coupling (the turbulence source terms were still computed with the current velocity gradients).

After seeing this I only ran one mesh level (the second to finest) at the high Mach number, and again the differences were very small and the convergence worse.

Some memory would indeed be saved in the discrete adjoint through the reduction of the number of pre-accumulation input variables, but only 30 MB out of almost 9 GB for a 2D case without MG.
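To spell out the coupling point, here is a schematic of one outer iteration (the function names are hypothetical stand-ins, not the actual SU2 iteration routines):

```cpp
// Illustration only: stubs standing in for the flow and turbulence sweeps
// of one outer nonlinear iteration.
void ComputeFlowResidualAndStoreFluxes()  {}  // flow residual; edge fluxes saved here
void UpdateFlowSolution()                 {}  // implicit update of the flow variables
void TurbConvectionFromStoredFluxes()     {}  // uses the pre-update (lagged) fluxes
void TurbConvectionFromCurrentPrimitives(){}  // uses the just-updated primitives
void TurbSourceTermsFromCurrentGradients(){}  // always uses the updated gradients

void OuterIteration(bool useStoredFluxes) {
  ComputeFlowResidualAndStoreFluxes();
  UpdateFlowSolution();

  if (useStoredFluxes)
    TurbConvectionFromStoredFluxes();       // Jacobi-like coupling for convection
  else
    TurbConvectionFromCurrentPrimitives();  // Gauss-Seidel coupling (current behavior)

  TurbSourceTermsFromCurrentGradients();    // "half GS": sources still see new gradients
}
```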

In summary, the current approach seems to strike a good balance between accuracy, cost, and simplicity.