idaholab / moose

Multiphysics Object Oriented Simulation Environment
https://www.mooseframework.org
GNU Lesser General Public License v2.1

Build team for RDG enhancements #12118

Closed WilkAndy closed 5 years ago

WilkAndy commented 5 years ago

Description

RDG offers the possibility of modelling advection without the excessive artificial numerical diffusion present in current implementations within MOOSE. Development of RDG within MOOSE will focus on framework enhancements to make working with RDG easier, enhancing the current RDG module, implementing RDG in the PorousFlow module, and potentially other enhancements depending on the interests of the team. We plan to start on October 1 and finish by Christmas, including code, tests, doco and examples.

This issue is just a convenient place for people to register their interest to be part of the team. It will be closed when we start the work.

What to do

If you are interested in being involved in any capacity, please add your github handle below.

People who want to code C++ are welcome (of course!) on both the framework side and the physics module side, but just as important are people who can contribute to documentation and tests. Tests could be code comparisons, comparisons with analytic solutions, as well as full-blown real-life examples. The amount of time you donate is up to you.

WilkAndy commented 5 years ago

Including @permcody , @yidongxiainl , @andrsd , @joshuahansel , @mrodb , @rpodgorney

lingzou commented 5 years ago

Please add me in.

cpgr commented 5 years ago

I probably won't have much time to help until November, but let me know if there is anything I can help with

andrsd commented 5 years ago

Here is some insight on how we do things in the rDG module (by no means comprehensible - I will add to this as needed):

BCs

There are two pieces of code that work together: an IntegratedBC-derived class and a BoundaryFluxBase-derived user object. The user object currently provides two pure virtuals, getFlux and getJacobian. The BC class calls the corresponding one from computeQpResidual / computeQpJacobian. On the first call the flux gets computed and stored in a cache, so that it is not recomputed every time in case we deal with a vector-valued flux (the module shows an example on a scalar equation).
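
Roughly, the pairing looks like this - a minimal sketch of the getFlux/caching idea only, with simplified, hypothetical names and signatures rather than the module's actual API:

```cpp
// Illustration only: a flux user object that computes a (possibly vector-valued) flux
// once per element/side and serves it from a cache on subsequent calls.
#include <map>
#include <utility>
#include <vector>

using Real = double;

class BoundaryFluxProviderSketch
{
public:
  const std::vector<Real> & getFlux(unsigned int elem_id,
                                    unsigned int side,
                                    const std::vector<Real> & solution) const
  {
    const auto key = std::make_pair(elem_id, side);
    auto it = _cache.find(key);
    if (it == _cache.end())                                // first call for this element/side:
      it = _cache.emplace(key, calcFlux(solution)).first;  // compute and cache
    return it->second;                                     // later calls reuse the cached flux
  }

protected:
  // Derived classes implement the physics (e.g. an upwind/Riemann flux).
  virtual std::vector<Real> calcFlux(const std::vector<Real> & solution) const = 0;

private:
  mutable std::map<std::pair<unsigned int, unsigned int>, std::vector<Real>> _cache;
};

// The IntegratedBC-derived class then does something like (per quadrature point):
//   return _test[_i][_qp] * _flux_uo.getFlux(elem_id, side, u)[_equation_index];
```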

There is a limitation in this design: if you need some extra values to compute the flux, the only way is to pass them as part of your solution vector - it is the 3rd parameter of getFlux and getJacobian. I think I want to try to change this design so that the flux is actually computed as a Material. We would provide a base class that would create the material properties for storing the flux and Jacobian and would call some virtual method that people would have to implement. This should allow people to bring pretty much any value they need into the flux computation.
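
As a rough sketch of what that Material-based base class might look like (hypothetical names and property strings; nothing like this exists in the module yet):

```cpp
// Hypothetical sketch: a Material declares properties holding the flux and its Jacobian
// and delegates the actual computation to a user-implemented virtual, so any coupled
// variable or other material property can enter the flux calculation.
#include "Material.h"

class NumericalFluxMaterialBase : public Material
{
public:
  NumericalFluxMaterialBase(const InputParameters & parameters)
    : Material(parameters),
      _flux(declareProperty<std::vector<Real>>("numerical_flux")),
      _dflux_du(declareProperty<DenseMatrix<Real>>("numerical_flux_jacobian"))
  {
  }

protected:
  void computeQpProperties() override
  {
    computeQpFluxAndJacobian(_flux[_qp], _dflux_du[_qp]);
  }

  // Users implement the physics here, pulling in whatever coupled values they need.
  virtual void computeQpFluxAndJacobian(std::vector<Real> & flux,
                                        DenseMatrix<Real> & dflux_du) = 0;

  MaterialProperty<std::vector<Real>> & _flux;
  MaterialProperty<DenseMatrix<Real>> & _dflux_du;
};
```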

Internal Side Fluxes

The same design idea as for BCs, except we pass two solution vectors into getFlux and getJacobian, and instead of an IntegratedBC we get the flux from a DGKernel. The same idea with caching to avoid recomputing the flux over and over. Originally this was not possible to do in any other way, because we were missing InterfaceMaterial. That's currently in progress (I am helping with that) and I hope it will be done by the end of the month. We would use the same idea I described for BCs.

The downside of the general user-object approach (besides the limited access to variable values) is that we have to take care of threading - for example, when one DGKernel is on a different element/side pair while another DGKernel needs the cached fluxes. So there is some locking behind the scenes. The Material approach should fix that. Also, there is some weird interplay with execute_on. If you get it wrong (and most people will the first time), the code fails with errors saying that values are not cached.

2 loop problem

This is the biggest problem with rDG. In multi-D, there have to be two sweeps through the mesh. This is currently solved with ElementLoopUserObject, which is a GeneralUserObject that does the extra loop and caches the values for the loop forming the residual. I personally do not like this solution and need to think about how to do it in a better way. I have ideas, but they all require a lot of MOOSE internal coding, so for the time being I think we will stick with this approach. This is just a heads-up so you guys know that something major might, and probably will, have to change in the future, and it will have a big impact on input files.
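
Conceptually, the workaround amounts to something like the following (purely illustrative pseudo-structure, not the actual ElementLoopUserObject code): sweep 1 caches reconstructed data per element, and sweep 2 (the residual loop) only reads it.

```cpp
#include <map>
#include <vector>

using Real = double;
using ElemId = unsigned long;

struct ReconstructionCacheSketch
{
  // Sweep 1: visit every element once and store, e.g., a limited slope per element,
  // computed from the cell averages of the element and its face neighbors.
  void preLoop(const std::map<ElemId, Real> & cell_averages)
  {
    for (const auto & entry : cell_averages)
      _slope[entry.first] = limitedSlope(entry.first, cell_averages);
  }

  // Sweep 2: DGKernels/BCs forming the residual just look up the cached value.
  Real slope(ElemId id) const { return _slope.at(id); }

private:
  Real limitedSlope(ElemId /*id*/, const std::map<ElemId, Real> & /*averages*/) const
  {
    return 0.0; // placeholder: a real implementation combines neighbor averages with a limiter
  }

  std::map<ElemId, Real> _slope;
};
```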

YaqiWang commented 5 years ago

I think the 2 loop problem can possibly be solved by deriving a new FEProblem. The new FEProblem can also hold new data that can be used by rDG BCs, etc.

rpodgorney commented 5 years ago

I won't be working on the coding per se, but I will gladly prepare some tests and help with documentation.

Have we decided where to house rDG? Should it be its own module, or should we package it under porous_flow?

andrsd commented 5 years ago

I think the 2 loop problem can possibly be solved by deriving a new FEProblem. The new FEProblem can also hold new data that can be used by rDG BCs, etc.

Yes, that's one of the designs I was exploring, but did not get far enough because of other work. I will certainly try that again.

andrsd commented 5 years ago

Have we decided where to house rDG? Should it be its own module, or should we package it under porous_flow?

There is already an rDG module in MOOSE. Your PorousFlow-specific classes should be in the porous_flow module. We just need to make the rdg module a dependency of the porous_flow module.

yidongxiainl commented 5 years ago

Hey guys, while you are discussing the coding plan, I'll leave a note on some further in-depth questions. First, will we be able to do mixed 2D/3D simulations with constant monomial (i.e. cell-averaged) solutions? In rDG for CFD we do not care about this feature, as no mixed 2D/3D is needed, but for subsurface this is a much-needed feature for representing fractures easily. Recalling our previous talk, I think we were not able to do it, because the degrees of freedom are defined within each element. Second, I observed an issue earlier this year when doing a tracer test with rDG, in a case where permeability differs hugely between fracture planes and the surrounding rock matrix. The simulation converges, but not to the expected solution. I think this is a more advanced thing to be discussed with Andy later on.

Sorry I have not been responding much here - I have been overly occupied by other project deliverables and milestones. But I'll put more effort into communicating with you guys as we approach the end of September.

WilkAndy commented 5 years ago

@andrsd - "comprehensible" or "comprehensive" in your first sentence? I'm thinking the former is appropriate for a novice like me, but you probably meant the latter, heheheh. I like the idea of a Material. Our flux will be the Darcy flux, which is easily computed. I'm not sure about the Jacobian: our flux will have derivatives with respect to all sorts of Variables, and at the moment the PorousFlow architecture is such that we don't need to know what these Variables are going to be until runtime. So we build std::vectors which contain the derivative information, based on what we find in the input file. This is so people can use (P, S), or (Pliquid, Pgas), or (enthalpy, S), etc., as their Variables and still use the same Kernels and Materials while experimenting with different Variable choices.
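
To make that concrete, the pattern is roughly the following (a toy sketch with made-up names, not actual PorousFlow code):

```cpp
// Toy sketch: a flux plus its derivatives, stored in a std::vector whose length and
// meaning are only fixed at runtime, once the input file tells us which Variables
// (e.g. (P, S) or (Pliquid, Pgas)) the PorousFlow variable numbers correspond to.
#include <vector>

using Real = double;

struct FluxAndDerivatives
{
  Real value;               // the Darcy-type flux
  std::vector<Real> dvalue; // dvalue[i] = d(flux)/d(PorousFlow variable i)
};

inline FluxAndDerivatives zeroFlux(unsigned int num_porous_flow_vars)
{
  // Sized from the runtime "dictionary" of PorousFlow variables found in the input file;
  // the Material that knows the physics then fills in the entries.
  return {0.0, std::vector<Real>(num_porous_flow_vars, 0.0)};
}
```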

@yidongxiainl - re the incorrect results. Is it the same as the "Retardation of the flow by the matrix" at the bottom of

http://mooseframework.org/moose/modules/porous_flow/flow_through_fractured_media.html

WilkAndy commented 5 years ago

By the way, I'm on holidays for much of the remainder of September, so if I don't respond here it's not because I'm not keen!

andrsd commented 5 years ago

@WilkAndy The Jacobian stores only entries w.r.t. the variables that are part of the flux vector. We currently build a map from the variable number to the component number and use that to index into the Jacobian matrix. It is not the most user-friendly code, but I think the base class can provide some API to make this easier.
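
Schematically, the lookup amounts to something like this (illustrative only, not the actual module code):

```cpp
// Sketch: map a MOOSE variable number to the component index within the flux vector,
// then use that component index to pull the right entry out of the flux Jacobian.
#include <map>
#include <vector>

using Real = double;

struct FluxJacobianLookupSketch
{
  // Built once, e.g. var_to_component[variable_number] = component in the flux vector.
  std::map<unsigned int, unsigned int> var_to_component;

  // Return dF_i / du_jvar, where jvar is a MOOSE variable number.
  Real entry(const std::vector<std::vector<Real>> & flux_jacobian,
             unsigned int i,
             unsigned int jvar) const
  {
    const unsigned int j = var_to_component.at(jvar); // throws if jvar is not part of the flux
    return flux_jacobian[i][j];
  }
};
```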

WilkAndy commented 5 years ago

Hi everyone,

I'd like to get started on this, and am wondering what our first task(s) should be. My feeling is that implementing the Flux+Jacobian in a Material is a good start. I'm less concerned about the 2-loop problem and the mixed-dimensional problem, because I just want to see something working. If this rDG doesn't work for PorousFlow for some reason, then I'm not super excited about optimising (2-loop) or generalising (mixed-dim).

I don't know much about rDG, so I have some questions.

  1. Are we planning to just do P0P1, as advocated by Yidong in https://link.springer.com/article/10.1007/s00603-016-0951-y ? Or is our approach going to be more general?

  2. Is P0P1 equivalent to constant-monomial + TVD? This is suggested by https://www.sciencedirect.com/science/article/pii/S0045793009001030 . Is that why I don't see any reconstruction in the current rDG module?

  3. The current single test in rDG uses explicit time integration (http://mooseframework.org/modules/rdg/index.html). Why? I will need implicit.

  4. My equations are not just the advection equation, and are more along the lines of https://www.sciencedirect.com/science/article/pii/S0045793017301536 . That is, something like 0 = dM/dt + grad(F). Here M is a nonlinear function of Variables, and F is a nonlinear function of Variables and their gradients: nonlinear and coupled advection and diffusion. What's our proposal for solving such an equation? Would some Variables be constant-monomial and others linear-Lagrange, with the gradients of the constant-monomial ones constructed using...?

Can you experienced people answer these questions and can we try to design some tasks (github issues?) on which we can work?

philippschaedle commented 5 years ago

@WilkAndy Thanks for putting this together, I am looking forward to this effort. I am quite busy during October, but happy to work on tests and real-life examples afterwards. I could also do some coding on the physics side if that makes sense.

@yidongxiainl re your incorrect results: have you had any success solving the fracture transport issue? I was working on the same thing and so far could not get correct results.

WilkAndy commented 5 years ago

Thanks @philippschaedle . We look forward to your input in Nov and Dec. I think I might be doing mostly testing + doco too :-)

andrsd commented 5 years ago

My feeling is that implementing the Flux+Jacobian in a Material is a good start.

InterfaceMaterial is not ready yet, so if you want to jump into coding, you will have to use the user-object approach. We talked about this yesterday, and what we plan to do is add the material approach as an alternative once all the needed pieces are in place, and then migrate onto it. We already have an ongoing effort to implement 2-phase flow with rDG, and switching to the material approach now would throw a monkey wrench into that. So we want to finish that first and make sure it works. And unfortunately, we (or at least I) have other things to work on which have much higher priority :-(

The major problem with the material approach is that we are currently unable to resolve material dependencies on internal edges - I think there is an issue open for it. I am talking about a case where one material would use the current and neighbor values to compute some other material property, which would then be picked up by another material.

2. Why don't I see any reconstruction in the current rDG module?

It is there. The base class is SlopeReconstructionBase, with two child classes, SlopeReconstructionOneD and SlopeReconstructionMultiD. It is just that the test does not use them - probably for simplicity.

3. I will need implicit

Implicit is definitely possible. There just isn't a test for it yet.

Also, keep in mind that the module is quite crude right now, so there will be rough edges to deal with.

permcody commented 5 years ago

It's been quite a while since @yidongxiainl originally implemented parts of that module, and several new features have been added to MOOSE since that time, notably the "Actually Explicit" capability and "RelationshipManagers". I can't remember whether Yidong's implementation required a wider stencil of information or not. Do we need just one element neighbor's information for each element calculation, or more? If just one for now (which I think is all we need), is there a potential that the stencil may expand for different types of problems in the future? Also, it seems like Yidong was reiniting neighboring elements so that he could get information off them in a single pass. I remember we talked about a caching system at some point, but that'll all play into how we decide to handle the 2-pass system (if that's what we end up doing).

I'd like to see if @friedmud or @rwcarlsen have any interesting ideas here.

yidongxiainl commented 5 years ago

  1. Are we planning to just do P0P1, as advocated by Yidong in https://link.springer.com/article/10.1007/s00603-016-0951-y ? Or is our approach going to be more general?

Yes. P0P1 is nothing but a second-order, cell-centered finite volume method here.

  2. Is P0P1 equivalent to constant-monomial + TVD? This is suggested by https://www.sciencedirect.com/science/article/pii/S0045793009001030 . Is that why I don't see any reconstruction in the current rDG module?

There is a reconstruction base class providing the data structures needed, but no implementation for specific variables at the level of the rDG module. Last year Rich ordered that the P0P1 implementation of the multi-dimensional Navier-Stokes equations be moved out of the navier_stokes module and back in-house, so at this moment you don't get to see a good example.

But for 1-D, in the rDG module, I implemented TVD for the scalar transport equation. There is a test case.

  3. The current single test in rDG uses explicit time integration (http://mooseframework.org/modules/rdg/index.html). Why? I will need implicit.

For pure convective transport there is no need for implicit, though it also works. For porous flow and heat transport, of course we do implicit. The only additional thing for implicit is to calculate the Jacobians.

  4. My equations are not just the advection equation, and are more along the lines of https://www.sciencedirect.com/science/article/pii/S0045793017301536 . That is, something like 0 = dM/dt + grad(F). Here M is a nonlinear function of Variables, and F is a nonlinear function of Variables and their gradients: nonlinear and coupled advection and diffusion. What's our proposal for solving such an equation? Would some Variables be constant-monomial and others linear-Lagrange, with the gradients of the constant-monomial ones constructed using...?

For P0P1, when we calculate Jacobians, we always consider only the P0 variables, i.e. constant-monomial. (I am really tired of saying "constant-monomial", as it was not part of my CFD training. In CFD language, it is just the cell-averaged variable.)

yidongxiainl commented 5 years ago

Do we need just one element neighbor's information for each element calculation, or more?

For a second-order, cell-centered finite volume method, we require only a 'compact stencil' during reconstruction and flux calculation, i.e. in 3D, the immediate neighboring elements that share faces with an element.

In the CFD world, people can choose a larger stencil, e.g. in addition to the face-neighboring elements, people can also include node-neighboring elements. But in production-quality codes, only the compact stencil is used, as it best minimizes MPI communication costs and load imbalance.

joshuahansel commented 5 years ago

Is P0P1 equivalent to constant-monomial + TVD ?

To clarify earlier responses to this question: I believe in general "P0P1" does not imply TVD; TVD is an independent property of the scheme, determined by the numerical flux approximation and the limiting during reconstruction. "P0P1" should only tell you that the solution is constant-monomial, reconstructed to linear.

yidongxiainl commented 5 years ago

Is P0P1 equivalent to constant-monomial + TVD?

No, P0P1 is not equivalent to constant-monomial + TVD. I'd like to confirm Joshua's understanding. A TVD scheme only exists in 1D, or on 2D and 3D structured grids that are orthogonal in each direction. For generic unstructured grids in 2D and 3D, TVD does not exist. Note that we do not have to satisfy TVD on multi-dimensional unstructured grids.

WilkAndy commented 5 years ago

I don't want to side-track this issue, but perhaps unstructured-grid TVD is available, as in:

https://dl.acm.org/citation.cfm?id=1027300

Tell me what you think, ie, educate me!

WilkAndy commented 5 years ago

So far no one has suggested tasks. I think my first task will be to create some doco explaining to myself how RDG P0P1 works in gory detail, since it seems we're going to focus on P0P1.

yidongxiainl commented 5 years ago

Andy, I know that is Kuzmin's work. His method is a kind of Flux Corrected Transport, and uses a larger stencil, not a compact stencil. To my earlier explanation, I would add that no TVD is possible with a compact stencil on multi-D unstructured grids.

yidongxiainl commented 5 years ago

perhaps unstructured-grid TVD is available, as in: https://dl.acm.org/citation.cfm?id=1027300

@WilkAndy All of Kuzmin's work is based on FEM-FCT (Flux Corrected Transport), including the paper you mentioned; it belongs to node-based FEM. Rich was once attempting to get FEM-FCT into MOOSE, which never actually happened due to many limitations of the data structures back then. I have no idea if it is still an option today @joshuahansel . Nevertheless, in this context we are talking about a cell-centered scheme.

@andrsd Is it possible for Andy to have access to the Bighorn repository? The best way for him to get familiar with how P0P1 works in multi-D is Bighorn, where I implemented the multi-D Euler equations (i.e. the Navier-Stokes equations neglecting the viscous terms).

In the meantime, I think I can send @WilkAndy the Bighorn documentation for him to learn P0P1 in detail.

yidongxiainl commented 5 years ago

xia2016bighorn.pdf

@WilkAndy The Bighorn report is an 'unlimited release' document, so I feel OK attaching it here for your use. Please feel free to raise questions regarding the report.

WilkAndy commented 5 years ago

Aha, thanks for the comments re Kuzmin's FCT, Yidong.

Great Bighorn documentation - thanks for that (I've only skimmed it so far).

joshuahansel commented 5 years ago

I have no idea if today [FEM-FCT] is still an option

It may be an option, but as of right now, unless RDG proves unable to meet our needs in RELAP-7, FEM-FCT will not be implemented anytime soon (at least not for RELAP-7). FEM-FCT might be difficult to implement in MOOSE. Per time step, it requires:

  1. A low-order solve (lumped mass matrix, low-order artificial viscosity).
  2. A high-order solve (consistent mass matrix, high-order or no artificial viscosity).
  3. Decomposition into anti-diffusive fluxes (one per neighbor for a given node - all are typically stored in a matrix). In general, these depend on the high-order and low-order solutions, as well as the artificial diffusion terms and mass matrix.
  4. Application of limited anti-diffusive fluxes to the low-order solution.

It might be possible, though I currently do not know how it would be done, due to the requirement of multiple solves per time step. It might just require a lot of additional work.
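
For intuition only, here is a toy, finite-volume-flavored sketch of one FCT step for 1-D linear advection with periodic boundaries. It is not the FEM-FCT form above (no mass matrices, no entropy viscosity); it only makes the four ingredients concrete: a low-order monotone update, a high-order flux, anti-diffusive fluxes, and a Zalesak-style limiter applied before correcting the low-order solution.

```cpp
// Toy 1-D explicit FCT step for du/dt + a du/dx = 0 (a > 0), periodic BCs.
#include <algorithm>
#include <vector>

using Real = double;

std::vector<Real> fct_step(const std::vector<Real> & u, Real a, Real dt, Real dx)
{
  const int n = static_cast<int>(u.size());
  auto idx = [n](int i) { return (i % n + n) % n; }; // periodic wrap

  // Face fluxes: low-order (upwind) and high-order (central); face i sits between cells i and i+1.
  std::vector<Real> FL(n), FH(n), A(n);
  for (int i = 0; i < n; ++i)
  {
    FL[i] = a * u[i];
    FH[i] = 0.5 * a * (u[i] + u[idx(i + 1)]);
    A[i] = FH[i] - FL[i]; // anti-diffusive flux
  }

  // 1. Low-order (monotone) update.
  std::vector<Real> uL(n);
  for (int i = 0; i < n; ++i)
    uL[i] = u[i] - dt / dx * (FL[i] - FL[idx(i - 1)]);

  // 2.-3. Zalesak-style limiting of the anti-diffusive fluxes.
  std::vector<Real> Rp(n), Rm(n), C(n);
  for (int i = 0; i < n; ++i)
  {
    const Real umax = std::max({uL[idx(i - 1)], uL[i], uL[idx(i + 1)]});
    const Real umin = std::min({uL[idx(i - 1)], uL[i], uL[idx(i + 1)]});
    const Real Pp = std::max(0.0, A[idx(i - 1)]) - std::min(0.0, A[i]); // anti-diffusion into cell i
    const Real Pm = std::max(0.0, A[i]) - std::min(0.0, A[idx(i - 1)]); // anti-diffusion out of cell i
    const Real Qp = (umax - uL[i]) * dx / dt;
    const Real Qm = (uL[i] - umin) * dx / dt;
    Rp[i] = (Pp > 0.0) ? std::min(1.0, Qp / Pp) : 1.0;
    Rm[i] = (Pm > 0.0) ? std::min(1.0, Qm / Pm) : 1.0;
  }
  for (int i = 0; i < n; ++i)
    C[i] = (A[i] >= 0.0) ? std::min(Rp[idx(i + 1)], Rm[i]) : std::min(Rp[i], Rm[idx(i + 1)]);

  // 4. Apply the limited anti-diffusive correction to the low-order solution.
  std::vector<Real> unew(n);
  for (int i = 0; i < n; ++i)
    unew[i] = uL[i] - dt / dx * (C[i] * A[i] - C[idx(i - 1)] * A[idx(i - 1)]);
  return unew;
}
```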

hsheldon commented 5 years ago

Happy to help when it comes to testing

WilkAndy commented 5 years ago

Hi @joshuahansel , @permcody and @andrsd (and any other interested party),

I've been thinking a lot about RDG, and also about the Kuzmin-Turek FEM-FCT stuff. I believe it's worth trying the latter in the case of PorousFlow. Slightly differently from @joshuahansel 's summary above, I think that to implement Kuzmin-Turek I need to:

  1. Evaluate a residual in the usual way (in KT parlance this is evaluating the advection term and then adding artificial diffusion to prevent overshooting).
  2. Examine the residual at the nodes, and compute the flux correction (this controls the addition of anti-diffusion to the residual in regions where the artificial diffusion wasn't really necessary).
  3. Based on the results of 1 and 2, re-build the residual at each node by combining the result of 1 with the anti-diffusion mentioned in 2.

At this time it's not super critical that this be beautiful code: I just want something that does Kuzmin-Turek so I can evaluate its performance in a number of scenarios. It doesn't have to be fast.

My question is: how would you implement the above in MOOSE?

joshuahansel commented 5 years ago

Is there a paper which describes the FCT method that you're referring to? Is it the one you linked to above?: https://www.sciencedirect.com/science/article/pii/S0021999104000221

I'm a bit uncertain about your description of examining the residual - are you referring to the nonlinear residual? I'm not sure what you could do with the residual alone, since that will just tell you how close you are to solving the nonlinear system. I believe you need to have the operators decomposed from the solution to compute the anti-diffusive fluxes.

joshuahansel commented 5 years ago

I've attached an FCT presentation I gave a few years ago: presentation.pdf

I did it for scalar radiation transport, and Kuzmin's work was my basis. The main difference is that we had high-order diffusion (entropy viscosity) instead of no artificial diffusion for the high-order scheme. If you're trying to do something different than this algorithm, then I am unfamiliar with it.

WilkAndy commented 5 years ago

I'm shifting the Kuzmin stuff to #12370 as it's a diversion from the RDG.

WilkAndy commented 5 years ago

Closing this issue: I have built Kuzmin-Turek flux-limiting TVD advection into PorousFlow. It is almost identical to RDG(P0P1), except that it operates on nodal information rather than cell-centered information, so it is more suitable for PorousFlow.