SciML / PDEBase.jl

Common types and interface for discretizers of ModelingToolkit PDESystems.
MIT License

A request for comment - Symbolic PDE Frontend #8

Open xtalax opened 1 year ago

xtalax commented 1 year ago

Calling all lurkers, yes you 👀.

Chances are if you are reading this you are interested in solving PDEs - we want to hear your ideas for a common base either here or in an issue.

What information do you need to discretize a system of PDEs, mathematically defined with ModelingToolkit.jl, in your package of choice, returning a SciMLSystem with symbolic_discretize, or SciMLProblem with discretize, or both?

We have a system for recognizing location and type of boundary condition, and extracting independent and dependent variables from the equations. In MethodOfLines.jl and (soon) NeuralPDE.jl we are using Symbolics and SymbolicUtils as an IR, see here for information on how to use rewrite rules, and here for how a simple discretizer is implemented in MOL. We want to expand this common parsing to get enough information to make it easy to discretize symbolic systems in the package of your choice.
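To make "what a simple discretizer produces" concrete, here is a minimal plain-Julia sketch (not the actual MethodOfLines.jl code, and using no symbolic machinery) of the end product of a finite-difference discretization: the 1D heat equation $u_t = D u_{xx}$ with Dirichlet boundaries, reduced to an ODE right-hand side on a uniform grid. All names here are illustrative.

```julia
# Sketch only: the kind of RHS function a finite-difference discretizer
# ultimately generates from the symbolic PDE. Not MethodOfLines.jl API.
function heat_rhs!(du, u, p, t)
    D, dx = p
    n = length(u)
    du[1] = 0.0            # Dirichlet BC: boundary value held fixed
    du[n] = 0.0            # Dirichlet BC: boundary value held fixed
    for i in 2:n-1
        # second-order central stencil standing in for the symbolic u_xx term
        du[i] = D * (u[i-1] - 2u[i] + u[i+1]) / dx^2
    end
    return du
end

# usage: 11-point grid on [0, 1], D = 1, u0 = sin(pi*x)
n = 11
dx = 1 / (n - 1)
u = sin.(pi .* range(0, 1, length = n))
du = similar(u)
heat_rhs!(du, u, (1.0, dx), 0.0)
```

In the symbolic pipeline, rewrite rules map each derivative term in the equations to a stencil like the one in the loop; `discretize` then wraps the generated function in the appropriate SciMLProblem.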

We are hoping to collect a case study of how to write a discretizer to include in the docs for others to learn. If we can align on a common frontend language and parser (optional in stages), and common benchmarking based on PDESystemLibrary, we can massively improve the state of solving PDEs in Julia.

But what IR/form do you need for the main equations? What about domain types? What is important to you? Any ideas please let us know.

Any questions about the current state of the SciML PDEs ecosystem also welcome.

xtalax commented 1 year ago

A SciML discretizer package implements the following:

DanielVandH commented 1 year ago

Replying to https://github.com/DanielVandH/FiniteVolumeMethod.jl/issues/29. I discuss a lot of this at https://danielvandh.github.io/FiniteVolumeMethod.jl/stable/interface/ and https://danielvandh.github.io/FiniteVolumeMethod.jl/stable/math/.

A high-level overview is as follows:

$$ \frac{\mathrm d}{\mathrm dt}\iint_{\Omega_i} u\,\mathrm dV + \sum_{\sigma \in \mathcal E_i} \int_\sigma \boldsymbol q \cdot \boldsymbol{\hat n}_{i, \sigma}\,\mathrm ds = \iint_{\Omega_i} R\,\mathrm dV, $$

where $\Omega_i$ is the $i$th control volume. The only approximations I make from here are:

  1. The line integrals are approximated by a midpoint rule,

$$ \int_\sigma \boldsymbol q \cdot \hat{\boldsymbol n}_{i, \sigma}\,\mathrm ds \approx \left[\boldsymbol q(x_\sigma, y_\sigma, t, u) \cdot \boldsymbol{\hat n}_{i, \sigma}\right] L_\sigma; $$

  2. The integrals $\iint_{\Omega_i} u\,\mathrm dV$ and $\iint_{\Omega_i} R\,\mathrm dV$ are replaced by $V_i u_i$ and $V_i R_i$, respectively, where $u_i$ is the value of $u$ at the $i$th node (the node that defines $\Omega_i$), and similarly for $R_i$.

  3. The function $\boldsymbol q(x_\sigma, y_\sigma, t, u)$ also needs to be approximated, since it could contain e.g. gradients. For this purpose, a linear interpolant is used, replacing any instance of $u$ with $\alpha x + \beta y + \gamma$ (with $\alpha, \beta, \gamma$ estimated over $\Omega_i$; see here). With this setup, the actual interface I use for defining $\boldsymbol q$ is through functions of the form flux(x, y, t, α, β, γ, p), so that any instance of u is replaced with αx + βy + γ, and e.g. gradients $\boldsymbol\nabla u(x, y, t)$ become $(\alpha, \beta)^T$.

So: the domain types are supposed to be anything; the main equation just needs to be provided as a function flux(x, y, t, α, β, γ, p) for $\boldsymbol q$ and R(x, y, t, u, p) for the source term; the boundary conditions are provided as functions of the form f(x, y, t, u, p), together with a label for the boundary condition type, given in an order corresponding to the ordering of the boundary; line integrals are approximated with midpoint rules, while volume integrals are replaced with the control volume's volume times the integrand's value at the defining node; and $u$ is represented by a linear interpolant over each element.
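The flux interface described above can be sketched in a few lines of plain Julia. Here is a diffusion example, $\boldsymbol q = -D\boldsymbol\nabla u$: with $u$ replaced by the interpolant $\alpha x + \beta y + \gamma$, the gradient becomes $(\alpha, \beta)$, so the flux function never sees $u$ directly. The helper `edge_contribution` is hypothetical, not part of any package API; it just applies the midpoint rule from approximation 1.

```julia
# Diffusion flux in the flux(x, y, t, α, β, γ, p) form described above:
# q = -D∇u, and ∇u = (α, β) under the linear interpolant.
flux(x, y, t, α, β, γ, p) = (-p.D * α, -p.D * β)

# Hypothetical helper: midpoint-rule contribution of one control-volume edge σ.
# Evaluate q at the edge midpoint, dot with the unit normal, scale by Lσ.
function edge_contribution(x, y, t, α, β, γ, p, nhat, Lσ)
    qx, qy = flux(x, y, t, α, β, γ, p)
    return (qx * nhat[1] + qy * nhat[2]) * Lσ
end

# usage: u ≈ 2x + 3y + 1 locally, D = 0.5, outward normal (1, 0), edge length 0.1
c = edge_contribution(0.0, 0.0, 0.0, 2.0, 3.0, 1.0, (D = 0.5,), (1.0, 0.0), 0.1)
```

Summing these contributions over the edges $\mathcal E_i$ and adding the $V_i R_i$ source term gives the semi-discrete ODE for node $i$.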

How's that?

xtalax commented 1 year ago

That sounds good.

So the way we could go about this is:

  1. Parse the bc equations into these atomic types and generate the functions; we are already most of the way there with that here.
  2. Convert the equations to the required form, throwing if unsupported.
  3. Generate the mesh from the supplied domain, given some information from the user in the discretization about density/number of points.
  4. Parse the equation with rules and generate the required functions.
  5. Discretize u0 on to the mesh.
  6. Construct and return the problem.
DanielVandH commented 1 year ago

That would be good. FYI, the package uses a vertex-centred discretisation, but some other people do something like a cell-centred method, though I'm not sure that changes any of these steps (maybe it's just buried in the details of step 2).

With this type of scheme, would it be simple to do something like the following (this is adaptive mesh refinement with a posteriori error estimators):

I'd be a bit concerned about the time it might take to complete steps 3–6 when implementing this approach, since we'd need to keep going back to step 2. This isn't a necessary feature of course, but just something worth considering (depending on the scope of this project, anyway).
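The refine–regenerate loop I have in mind looks something like the following 1D toy (everything here is illustrative: sampling a known function with a sharp layer stands in for the PDE solve, and a crude jump indicator stands in for a real a posteriori estimator):

```julia
# Toy adaptive refinement loop — illustrative only, no package API assumed.
f(x) = tanh(20 * (x - 0.5))            # stand-in "solution" with a sharp layer

function refine_once(xs; tol = 0.05)
    us = f.(xs)                        # stand-in for the solve step
    newxs = Float64[xs[1]]
    for i in 2:length(xs)
        # flag the cell if the jump in u across it is large (crude indicator)
        if abs(us[i] - us[i-1]) > tol
            push!(newxs, (xs[i-1] + xs[i]) / 2)   # bisect: add the midpoint
        end
        push!(newxs, xs[i])
    end
    return newxs
end

function adapt(xs, passes)
    for _ in 1:passes
        xs = refine_once(xs)           # steps 3–6 rerun on each pass
    end
    return xs
end

xs = adapt(collect(range(0.0, 1.0, length = 11)), 4)
```

Each pass regenerates the grid (and in a real discretizer, the equations), which is exactly the cost I'm worried about; points cluster near the layer at x = 0.5 while the smooth regions stay coarse.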

xtalax commented 1 year ago

Would you need to redefine the interior functions for the new mesh every time? If so then regenerating these each time can't be avoided, but if not we can design it in such a way that it is unnecessary.

We can also get away with reconstituting the equation just once to start, I have re-ordered them in the post.

I don't think there's a SciMLProblem for dealing with such adaptive refinement, but perhaps we can define one which can. Overall I think such a scheme is achievable, but it will need its own solve method.

Edit: By step 2, I mean converting to the integral form in case that wasn't clear

DanielVandH commented 1 year ago

Hmm, I believe that since refining would introduce new points the system of equations would change, so as you say regenerating them is probably necessary. I was thinking that since points only get added but never deleted, you're only adding new equations, but of course the geometry relationships change.

The reordering is good. I think as long as it's easy to work with the solution, and I can't imagine why not, this type of (adaptive) post-processing should be easy to implement separately from this package/SciML.

xtalax commented 1 year ago

With broadcasted symbolic operations, recompilation is not too much overhead; I'm working on this now. Scalarised, elementwise equations more so.

We have a system for post solve solution wrapping, check out the MOL solution interface.