TDycores-Project / TDycore


Rework TDycore with polymorphic "ops" tables #197

Closed: jeff-cohere closed this issue 2 years ago

jeff-cohere commented 3 years ago

See @jedbrown's comment here. By using function pointers to dispatch mode-specific work, we can avoid a lot of if and switch tests.

In order to make this work, we should identify the operations/behaviors we want to capture in such a table of function pointers.
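
To make the idea concrete, here is a small standalone sketch of the pattern (none of the names below are actual TDycore code): a "mode" fills in a table of function pointers once, and generic driver code dispatches through the table instead of branching with if/switch.

```c
/* Standalone illustration of the ops-table pattern; names are made up. */
#include <stdio.h>

typedef struct Model Model;

typedef struct {
  int (*setup)(Model *m);
  int (*update_state)(Model *m, double *u);
} ModelOps;

struct Model {
  ModelOps ops;   /* mode-specific behavior lives here */
  void *context;  /* mode-specific data, if any */
};

/* A "Richards-like" mode supplies its own implementations... */
static int RichardsSetup(Model *m)                   { (void)m; printf("richards setup\n"); return 0; }
static int RichardsUpdateState(Model *m, double *u)  { (void)m; *u += 1.0; return 0; }

/* ...and its creator wires them into the table once. */
static int ModelCreateRichards(Model *m) {
  m->ops.setup        = RichardsSetup;
  m->ops.update_state = RichardsUpdateState;
  m->context          = NULL;
  return 0;
}

/* Generic driver code never needs to switch on the mode. */
int main(void) {
  Model m;
  double u = 0.0;
  ModelCreateRichards(&m);
  m.ops.setup(&m);
  m.ops.update_state(&m, &u);
  printf("u = %g\n", u);
  return 0;
}
```

The ops table already present in TDycore is the same idea; the question below is how far to push it.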

jeff-cohere commented 3 years ago

TDycore already has an ops table that we can use a bit more extensively. In fact, it looks like #45 makes reference to this table. If we start using it more systematically, I think things will start to make more sense.

This is another issue that would benefit from the discussion in #200.

jeff-cohere commented 2 years ago

Looking through the code, it seems like we're making use of these "operations" only for material properties, forcings, and boundary conditions. Here are some other areas that might deserve their own ops within TDycore:

The ops table also has entries for creation, destruction, and setting things from options. The way we do things now, TDySetFromOptions handles all options for the totality of our supported functionality, so we can probably scuttle these operations. But this is a matter of taste, and I'm open to ideas. Maybe I'll try out an approach or two using some demos and come up with a recommendation.

jeff-cohere commented 2 years ago

Here's an approach used in C-based "polymorphic" software libraries. It takes advantage of our functional decomposition and can help prevent people from selecting unsupported or nonsensical combinations of options.

What if we replace TDyCreate, TDySetMode, and TDySetDiscretizationMethod with an internal general "constructor" function that accepts an ops table as its argument, with the functions in that table defining all behaviors for a given configuration (mode + discretization + solver)? E.g.

```c
PetscErrorCode TDyCreate(TDyOps ops, TDy* tdy);
```

Then we could expose specific functions for creating supported configurations (sketched just below), and so on. The ops table could then be expanded to handle things like solves, which could be exposed via appropriate TDy* functions in our interface.
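
Here is roughly what that could look like end to end. Emphasis on sketch: the struct layouts, the ops field names, and the configuration-specific creator name (TDyCreateRichardsMPFAO) are made-up placeholders, not existing TDycore API; only the TDyCreate signature is the one proposed above.

```c
/* Hypothetical sketch; layouts and configuration names are placeholders. */
#include <petscsys.h>

typedef struct _p_TDy *TDy;

typedef struct {
  PetscErrorCode (*setfromoptions)(TDy);
  PetscErrorCode (*setup)(TDy);
  PetscErrorCode (*solve)(TDy);
  PetscErrorCode (*destroy)(TDy);
} TDyOps;

struct _p_TDy {
  TDyOps ops;
  void  *context;
};

/* Internal, general constructor: the ops table it receives defines all
 * behavior for one configuration (mode + discretization + solver). */
static PetscErrorCode TDyCreate(TDyOps ops, TDy *tdy) {
  PetscErrorCode ierr;
  ierr = PetscMalloc1(1, tdy);CHKERRQ(ierr);
  (*tdy)->ops     = ops;
  (*tdy)->context = NULL;
  return 0;
}

/* Placeholder implementations for one made-up configuration. */
static PetscErrorCode SetFromOptions_RichardsMPFAO(TDy tdy) { (void)tdy; return 0; }
static PetscErrorCode Setup_RichardsMPFAO(TDy tdy)          { (void)tdy; return 0; }
static PetscErrorCode Solve_RichardsMPFAO(TDy tdy)          { (void)tdy; return 0; }
static PetscErrorCode Destroy_RichardsMPFAO(TDy tdy)        { (void)tdy; return 0; }

/* Public, configuration-specific creator (name is made up): it wires up
 * exactly the functions that make sense for that configuration, so an
 * unsupported combination simply can't be constructed. */
PetscErrorCode TDyCreateRichardsMPFAO(TDy *tdy) {
  TDyOps ops = {SetFromOptions_RichardsMPFAO, Setup_RichardsMPFAO,
                Solve_RichardsMPFAO, Destroy_RichardsMPFAO};
  return TDyCreate(ops, tdy);
}
```

Public entry points (a hypothetical TDySolve, say) would then just forward to the corresponding entry in the table.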

The advantage of this approach is that we wouldn't have to change the library's interface much if we wanted to add another configuration--we'd just add another one of these functions for creating the configuration. We could also take advantage of the specificity of a supported configuration to support specific command-line options that don't make sense in general.

If we wanted to, we could also support a function that specifies a complete configuration (TDySetConfig(mode, discretization, solver), perhaps; a possible signature is sketched below). To "minimize the surface area" of the interface, I would recommend that all of the relevant config details be passed to a single function. Our current style of using TDySetMode and TDySetDiscretizationMethod separately requires a lot of error checking and implies promises we can't keep.
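
For what it's worth, a signature for that single-call option might look like the following (building on the types in the sketch above; the enum names are placeholders, not existing TDycore identifiers).

```c
/* Hypothetical signature; the enum names are placeholders. */
typedef enum { TDY_MODE_RICHARDS, TDY_MODE_TH } TDyMode;
typedef enum { TDY_DISC_MPFAO, TDY_DISC_FE }    TDyDiscretization;
typedef enum { TDY_SOLVER_SNES, TDY_SOLVER_TS } TDySolverType;

/* One call carries the complete configuration, so compatibility can be
 * checked in a single place instead of across several setters. */
PetscErrorCode TDySetConfig(TDy tdy, TDyMode mode, TDyDiscretization disc,
                            TDySolverType solver);
```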

But all of this is also a matter of taste, and I'll defer to what the group wants.

jeff-cohere commented 2 years ago

Another question related to ops: currently we offer the ability to supply a "context pointer" (void *) specific to each function pointer for computing material properties and boundary conditions. This is maximally flexible but also maximally complicated. Do we foresee a need to offer this flexibility versus, say, having a single context pointer for a TDy instance that gets used by all function pointers in the ops table? This is how it's "usually" done with polymorphic types whose behavior is defined by function pointers.

You can see these various context pointers in the p_TDy type: porosityctx, permeabilityctx, and so on.

I can imagine elaborate use cases where we interpolate from tables and such to fill in material properties or boundary conditions. If we're actually interested in doing such things, I understand the need for the complexity. But another solution that arguably captures this concept a bit better is to create proper types for material models and boundary conditions, and have them manage their own contextual data.
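
For concreteness, here is a rough sketch of the two shapes being compared (the field names are illustrative, not the actual p_TDy layout).

```c
/* Illustrative only; not the actual p_TDy layout. */
#include <petscsys.h>

typedef PetscErrorCode (*PropertyFn)(void *ctx, PetscReal *x, PetscReal *val);

/* Option A (current style): one context pointer per callback. */
typedef struct {
  PropertyFn compute_porosity;     void *porosityctx;
  PropertyFn compute_permeability; void *permeabilityctx;
  /* ... */
} PerCallbackContexts;

/* Option B: every function in the ops table shares one context, as is
 * typical for polymorphic C types. */
typedef struct {
  PropertyFn compute_porosity;
  PropertyFn compute_permeability;
  void *context;  /* handed to every callback */
} SharedContext;
```

The third option mentioned above (proper material-model and boundary-condition types that own their data) is arguably Option B pushed down one level, with the context living inside each small type.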

bishtgautam commented 2 years ago

I believe we have the context pointer (such as void *porosityctx;) because we duplicated what is in PETSc. I don't see the need for it.

jedbrown commented 2 years ago

We have separate contexts in PETSc callbacks because it avoids imposing a specific granularity on user code. So the SNES residual and Jacobian implementations could be provided by the same C++ class (context = this) or different classes. Since those void * are not reference counted, there's no harm in using the same context for each callback. In a language with stricter own/borrow semantics (Rust), it would be somewhat more involved (and surely look different).
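
As a reminder of what that looks like on the PETSc side (a minimal sketch, not code from this thread): SNESSetFunction and SNESSetJacobian each accept their own void* context, but nothing prevents passing the same pointer to both.

```c
/* Minimal sketch of the PETSc pattern described above. */
#include <petscsnes.h>

typedef struct {
  PetscReal param;  /* shared problem data */
} AppCtx;

static PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx) {
  AppCtx *user = (AppCtx *)ctx;  /* same object as in FormJacobian */
  /* ... evaluate the residual using user->param ... */
  return 0;
}

static PetscErrorCode FormJacobian(SNES snes, Vec x, Mat J, Mat P, void *ctx) {
  AppCtx *user = (AppCtx *)ctx;  /* same object as in FormFunction */
  /* ... assemble the Jacobian using user->param ... */
  return 0;
}

/* Registration: separate ctx arguments, but the same pointer is passed
 * to both callbacks, so the granularity is entirely up to the caller. */
PetscErrorCode SetupSolver(SNES snes, Vec r, Mat J, AppCtx *user) {
  PetscErrorCode ierr;
  ierr = SNESSetFunction(snes, r, FormFunction, user);CHKERRQ(ierr);
  ierr = SNESSetJacobian(snes, J, J, FormJacobian, user);CHKERRQ(ierr);
  return 0;
}
```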

jeff-cohere commented 2 years ago

Here's an incomplete list of stuff required for this task: