glwagner / dedaLES

Large Eddy Simulation with dedalus
https://dedales.readthedocs.io
GNU General Public License v3.0

Split viscosity for stabilizing LES time stepping #16

Closed glwagner closed 5 years ago

glwagner commented 5 years ago

@wenegrat and I talked about implementing a 'split viscosity' method for stabilizing time-stepping.

Essentially, we would like to allow the user to specify, in the construction of an EddyViscosityClosure, some constant 'splitting viscosity' (or diffusivity) that is added to the left side of an equation and subtracted from the right side.

For example, introducing diffusivity splitting into the buoyancy equation produces

b_t - κ_split Δb + ... = ... + ∇ · [ (κ_sgs - κ_split) ∇b ]

@wenegrat has found this technique allows larger time-steps and increases numerical stability.
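
For concreteness, a minimal sketch of how the split buoyancy equation might be entered in Dedalus v2 problem syntax. All names here are illustrative: it assumes the closure defines a `kappa_sgs` substitution, that `bz = dz(b)` is a first-order reduction variable, and that advection is the only other RHS term.

```python
# Splitting diffusivity: constant, so it can live on the linear LHS.
problem.parameters['kappa_split'] = 1e-4
# The explicit RHS carries the SGS diffusivity minus the split part.
problem.substitutions['kappa_diff'] = "kappa_sgs - kappa_split"
problem.add_equation(
    "dt(b) - kappa_split*(dx(dx(b)) + dy(dy(b)) + dz(bz)) = "
    "- u*dx(b) - v*dy(b) - w*bz "
    "+ dx(kappa_diff*dx(b)) + dy(kappa_diff*dy(b)) + dz(kappa_diff*bz)")
```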

glwagner commented 5 years ago

@wenegrat looking at the code now, it seems an issue can arise if a user wishes to use some non-constant background velocity field around which to formulate the turbulent closure.

For example, see this line:

https://github.com/glwagner/dedaLES/blob/c904946815c1c87352d246c9e4a109d5b1833919/dedaLES/closures.py#L114

One option is to introduce additional keyword arguments into the `add_substitutions` function that allow the user to specify `u_split`, etc.

In that case, the function signature for `add_substitutions` becomes

```python
def add_substitutions(self, problem, tracers=[], u='u', v='v', w='w',
                      u_split='u', v_split='v', w_split='w', **kwargs):
```

It's not a perfect solution, but it's perhaps necessary if you wish to split your viscosity for a perturbation field only, rather than for a full field that includes a fixed but non-constant-in-space background component.
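
Hypothetical usage, with `U(z)` an assumed non-constant background added to the closure's velocity while only the perturbation is split on the LHS (none of this is final API):

```python
# Closure built around the full field U + u; only the perturbation u is
# split implicitly. 'U' must be defined in the problem namespace.
closure.add_substitutions(problem, tracers=['b'],
                          u="U + u", v="v", w="w",
                          u_split="u", v_split="v", w_split="w")
```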

Perhaps `u_LHS` is a better name than `u_split`. Or another name.

What do you think? Ideas?

kburns commented 5 years ago

This would be in addition to the molecular diffusivity, which is already on the LHS? The idea is that it suppresses instabilities coming from under-resolving the short decay times of the explicitly integrated subgrid model? This may work to stabilize things, but it usually won't be an accurate approximation of that decay rate. Maybe this is ok though, if you think the subgrid model is sporadically giving super short decay times that we don't mind losing?
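
A toy illustration of the instability in question (numbers arbitrary): forward Euler on y' = -λy is stable only when dt < 2/λ, so a fast decay rate coming from the subgrid terms blows up once the time-step stops resolving the decay time.

```python
lam = 1e3                        # fast decay rate, e.g. nu_sgs / dz**2
for dt in (1e-4, 3e-3):          # first resolves 1/lam, second does not
    G = 1 - lam * dt             # forward-Euler amplification factor
    print(f"dt={dt:g}, |G|={abs(G):.2f} ->",
          "stable" if abs(G) < 1 else "unstable")
```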

wenegrat commented 5 years ago

My thinking on this was motivated by this thread: https://groups.google.com/forum/#!topic/dedalus-users/x4ExWg8PtUw

That, in turn, came out of trying to reproduce setups like: https://journals.ametsoc.org/doi/10.1175/JPO-D-17-0269.1

I was struggling to get any sort of numerical stability for this setup when the SGS terms were integrated explicitly (tried a variety of different timesteppers, small timesteps on the order of 10^-2 seconds, different initial noise configurations, etc.). Splitting them as per the above discussion did the trick and seems to allow the model to run stably.


glwagner commented 5 years ago

@wenegrat says he has played around with setting strict restrictions on the time-step with little success. So in general, yes, I think we're willing to accept some error in exchange for numerical stability.

I’m honestly a little confused about this — does a too-large time-step that causes numerical instability actually imply large errors? I’m under the impression that in some cases a numerical scheme might be unstable even for infinitesimal time-steps. But I am not exactly a sophisticate 😏

Jacob, check out my question above. How do you implement splitting with non-constant backgrounds?

kburns commented 5 years ago

I'd say a common source of numerical instability in diffusive problems is from trying to explicitly integrate a quickly-decaying mode with time steps that are too large. This integration would be both accurate and stable if you resolved the decay timescale. Most (all?) explicit schemes will become unstable if you do not resolve the decay timescale. You can attempt to "restabilize" them by implicitly integrating a diffusive term and compensating for it on the explicit side. Typically this will stabilize things if the implicit decay is larger than the original explicit decay, but it likely won't be accurate (i.e. the mode may decay very slowly).
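
The same point in a scalar model (a sketch, not any particular closure): on y' = -λy, treat λ_s implicitly and the remainder λ - λ_s explicitly. The scheme is stable for any dt once λ_s ≥ λ/2, but for dt ≫ 1/λ_s the amplification factor tends to 1 - λ/λ_s, so the mode decays much more slowly than its true rate.

```python
def split_growth(lam, lam_s, dt):
    """Per-step amplification: implicit lam_s, explicit (lam - lam_s)."""
    return (1 - (lam - lam_s) * dt) / (1 + lam_s * dt)

for dt in (1e-4, 1e-2, 1.0):     # accurate, stable-but-inaccurate, saturated
    print(f"dt={dt:g}, G={split_growth(lam=1e3, lam_s=600.0, dt=dt):+.3f}")
```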

glwagner commented 5 years ago

Fair point. But does that mean that an implicit treatment of terms is always "implicitly" acknowledging a sacrifice in accuracy for the sake of stability?

In other words, if accuracy were our primary goal, we would not use implicit stepping because this could mask an inaccurate solution that would otherwise have warned of its inaccuracy by blowing up.

kburns commented 5 years ago

Basically yes, but the caveat is that the modes we're talking about are really the numerical eigenmodes of the linearized equations. If you think the numerical eigenmodes with the shortest decay times are spurious (i.e. a result of your discretization and non-physical), then you probably don't actually care about integrating them accurately. This is the case for instance with the linear diffusive terms -- the discrete Laplacians have numerical eigenmodes with very fast spurious decay rates, and we integrate them implicitly to stabilize these modes. This probably also means that we are not accurately tracking the decay of the smallest modes in the box in over-resolved DNS.
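
A quick numerical check of this claim for a second-order finite-difference Laplacian (the Chebyshev case is stiffer still, with extreme eigenvalues scaling like N^4):

```python
import numpy as np

for N in (32, 64, 128):
    h = 1.0 / N
    L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
         + np.diag(np.ones(N - 1), -1)) / h**2
    # The fastest numerical decay rate grows like 4/h^2 = 4*N^2.
    print(N, np.linalg.eigvalsh(L).min(), -4 / h**2)
```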

glwagner commented 5 years ago

Ah yes, the joys of over-resolved DNS!!

But ok, I see the subtlety of your point. For under-resolved LES, we probably do want to track the smallest modes in the box.

The question is whether the numerical eigenmodes are spurious. For the discrete Laplacian we know the ultra-fast eigenmodes to be spurious. For eddy viscosity LES closures, we haven't a clue as to the nature of the linearized, discrete eigenmodes. Yet, given the similarity of the eddy viscosity operator to the discrete Laplacian, it is reasonable to suspect that something similar can occur, especially if, for example, the eddy viscosity is nearly constant.

Perhaps some tests in practical cases will shed light on this issue...

wenegrat commented 5 years ago

Treating the eddy viscous terms implicitly is, as far as I know, fairly standard. Simply as a practical matter, this gets around the issues inherent in trying to deal with these stiff terms explicitly. My impression is that energy removal at the small scales is really all that matters, and the 'accuracy' of how this is accomplished is often a secondary concern (for example, implicit LES relies entirely on numerical dissipation).

Now, how to do this within Dedalus is a separate question. The splitting method (thanks for implementing it, Greg!) appears useful, and in my own tests I've been able to get the code to run stably at a reasonable time-step this way. My read is that it might fall under what have been termed Explicit-Implicit-Null methods (see https://www.sciencedirect.com/science/article/pii/S0021999114000400).

As implemented, though, this splitting is a bit challenging because it requires an a priori guess at the linear viscosity. The initial guess can come from a short test integration of the simulation, but it is difficult to know how the solution will evolve over long integrations. Guess too large a linear viscosity and solution accuracy is degraded; guess too small and later time-steps will be diffusively limited.

So what we would really like is the ability to occasionally update the linear viscosity value on the LHS based on how the simulation is evolving. My understanding is that this is not currently supported in Dedalus (for example, we can't update a field variable that is on the LHS). Is this correct, Keaton? The second question is whether we could force a recalculation of the pencil. Even though this is slow, it could still lead to a net improvement in speed. For instance, I have a simulation that has been running for several days and is now taking 1-second time-steps because of the RHS diffusion term, even though the CFL condition allows steps roughly an order of magnitude larger.
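
A back-of-the-envelope version of that diffusive limit (all numbers illustrative): the explicit step is bounded by roughly dz²/κ, which bites hardest at the fine near-wall Chebyshev spacing.

```python
import numpy as np

z = np.cos(np.pi * np.arange(129) / 128)   # Chebyshev-like grid on [-1, 1]
dz_min = np.abs(np.diff(z)).min()          # finest spacing, near the walls
kappa_max = 1e-2                           # assumed peak eddy diffusivity
print("explicit diffusive dt ~", 0.5 * dz_min**2 / kappa_max)
```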

Cheers! Jacob


glwagner commented 5 years ago

@wenegrat we could design a tool in dedaLES that automatically re-instantiates a model with new arguments (for example, a new closure with a different splitting viscosity) based on some arbitrary criterion --- without modifying Dedalus. I think this would provide the functionality you're after.

It may even be possible to integrate this into the `solver.ok` boolean by subclassing the Dedalus solver. But a nice place to start would be to code up this idea in a script.
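
A schematic version of the idea (none of these names are real dedaLES API; `build_model`, `copy_state`, and `max_eddy_viscosity` are hypothetical helpers standing in for model construction, state transfer, and a flow diagnostic):

```python
def run_with_adaptive_splitting(build_model, copy_state, nu_split, dt,
                                check_cadence=100):
    model = build_model(nu_split)
    while model.solver.ok:
        model.solver.step(dt)
        if model.solver.iteration % check_cadence == 0:
            nu_max = model.max_eddy_viscosity()
            if nu_max > 2 * nu_split:              # example re-split criterion
                nu_split = nu_max                  # raise the LHS viscosity
                new_model = build_model(nu_split)  # new closure, new matrices
                copy_state(model, new_model)       # transfer solver state
                model = new_model
```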

wenegrat commented 5 years ago

Interesting idea. Let me play around with the way you've implemented the diffusive time-step checker. I think it's smarter than what I did, which was based on the minimum grid size and maximum diffusivities. Some back-of-the-envelope calculations I just did suggest I might be able to get the speed-up I'm after by fixing this alone. Cheers, Jacob


glwagner commented 5 years ago

Thinking about the time-step limitation for diffusion might also explain why constant Smagorinsky is less stable than AMD: while AMD should have zero eddy viscosity/diffusivity near the wall, where the Chebyshev resolution is finest, constant Smagorinsky will have some finite value there, which could impose a stricter limitation on the time-step.

To rephrase @wenegrat's first comment --- if we regard the total 'error' as the deviation of the numerical LES solution from a hypothetical 'exact' Navier-Stokes solution, we may expect that in some applications the LES model itself is the largest source of error, so that we aren't concerned with numerical error and over/under damping of high modes associated with too-large time-steps.

It occurs to me that we can also calculate a time-step limitation by adding the subgrid diffusive flux as a 'velocity' in the CFL calculator. Have you tried that, @wenegrat?
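
One hedged way this might be wired into Dedalus v2's `flow_tools.CFL`: treat ν_sgs/δ as an effective velocity and register it alongside the real velocities. This assumes substitutions can be passed to `add_velocities` the way variables can, and that the closure defines `nu_sgs` and a grid scale `delta`; neither is verified here.

```python
from dedalus.extras import flow_tools

# Effective 'diffusion velocity' built from the closure's substitutions
# (would need to be registered before the equations are parsed).
problem.substitutions['u_diff'] = "nu_sgs / delta"

CFL = flow_tools.CFL(solver, initial_dt=1e-2, cadence=10, safety=0.5)
CFL.add_velocities(('u', 'v', 'w'))                  # advective limit
CFL.add_velocities(('u_diff', 'u_diff', 'u_diff'))   # subgrid-diffusive limit
```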

kburns commented 5 years ago

@wenegrat -- do you know of any references that implicitly integrate the closure terms? This would surprise me a little bit since those terms are nonlinear.

As for modifying the linear terms -- yes, this can be done by modifying the values of the parameter fields and re-calling `build_matrices(solver, problem, ['M', 'L'])`, where `build_matrices` is from `dedalus.core.pencil`. It's not cheap (it rebuilds all the matrices) -- but there's no need to build a new problem or solver.
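
Sketched out (assuming `nu_split` enters the problem as a parameter field; depending on the Dedalus version, the first argument may need to be `solver.pencils` rather than `solver`):

```python
from dedalus.core.pencil import build_matrices

nu_split_field['g'] = new_nu_split           # new value computed elsewhere
build_matrices(solver, problem, ['M', 'L'])  # rebuild mass + linear matrices
```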

kburns commented 5 years ago

I'll add that things would go badly if you used a multistep time-stepper for this, but it should be fine with the Runge-Kutta time-steppers.

wenegrat commented 5 years ago

Hi Keaton,

Sorry about the delay. One approach I've seen used is to implicitly integrate diffusion in the wall-normal direction, since this direction often has high resolution requirements to resolve near-wall layers. This is discussed briefly in https://arc.aiaa.org/doi/pdf/10.2514/6.2013-3094 and references therein (for example, https://searchworks.stanford.edu/view/3048953).

For an oceanographic example you can see the Diablo code, versions of which are used by John Taylor and others quite extensively: https://github.com/johnryantaylor/DIABLO

My interpretation of the DIABLO code available in the GitHub repository is that the turbulent viscosity is calculated once at the beginning of each time-step, which linearizes the turbulent diffusion term and makes implicit integration possible. The caveats are that 1) I could be misinterpreting the code :), 2) this particular version of the DIABLO code is aimed at simplicity, so it's possible that other versions do not do this, and 3) I have no sense of how common this is in other codes.
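
A toy 1D illustration of that frozen-coefficient idea (a sketch, not DIABLO's actual scheme): evaluate a Smagorinsky-like eddy viscosity from the current state, then take a backward-Euler diffusion step with that viscosity held fixed, so the implicit operator is linear.

```python
import numpy as np

def frozen_viscosity_step(u, dz, dt, c_s=0.17):
    """One implicit diffusion step with nu_t frozen at the start of the step."""
    nu_t = (c_s * dz)**2 * np.abs(np.gradient(u, dz))  # toy Smagorinsky nu_t
    nu_face = 0.5 * (nu_t[:-1] + nu_t[1:])             # viscosity at cell faces
    N = u.size
    A = np.eye(N)                                      # identity rows pin the ends
    for i in range(1, N - 1):
        A[i, i - 1] = -dt * nu_face[i - 1] / dz**2
        A[i, i] += dt * (nu_face[i - 1] + nu_face[i]) / dz**2
        A[i, i + 1] = -dt * nu_face[i] / dz**2
    return np.linalg.solve(A, u)                       # linear implicit solve
```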
