jeremiah-corrado / fem_2d

A Rust library for 2D Finite Element Method computations
https://crates.io/crates/fem_2d
MIT License

Observations and suggestions for the JOSS review. #2

Open YohannDudouit opened 2 years ago

YohannDudouit commented 2 years ago

Here are some observations and suggestions that you're welcome to consider:

Overall, I think the code could push the separation between abstraction and implementation further, and provide a few more building blocks for more general finite element problems. For instance, the BasisFn struct seems much closer to implementation details than an abstraction of the mathematical concept of a basis. Many parts of the library are undocumented, making it difficult to grasp the details of the library. The module structure could also better represent the structure of the project; it can be difficult to navigate and grasp the organization of the library. For instance, why are fem_2d::basis::glq and fem_2d::integration::integrals::glq separated?

On the structure/organization/documentation/naming:

On the design:

pub trait CurlShapeFn: ShapeFn { ... }

pub trait HierarchicalShapeFn: ShapeFn { ... }

pub trait HierarchicalCurlShapeFn: CurlShapeFn + HierarchicalShapeFn { ... }


- It is not clear how the Curl-conformity contract is verified. Is it through the interface? The same question applies to the hierarchical property.

- I think the pattern `Object`/`ParObject` is not very idiomatic: two implementations of the same concept (same semantics) should not result in two disjoint types. They should either be unified behind a trait, or their parallel policy should become a type parameter, `Object<SerialPolicy/ParallelPolicy>` (see the sketch after this list).

- The `Materials` struct seems to be very specialized to problems where each element's physical properties can be described with two scalar constants per element, which seems limited to simple Maxwell problems. Is it easy for a user to plug in their own physical description? How would that work if physical properties were defined by a function inside each element?
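As a hedged illustration of the parallel-policy option mentioned above (this is not fem_2d code; the type and trait names are hypothetical), a zero-sized policy type parameter could look like:

use std::marker::PhantomData;

// Hypothetical policy types (illustrative names only)
pub struct Serial;
pub struct Parallel;

pub trait ExecutionPolicy {}
impl ExecutionPolicy for Serial {}
impl ExecutionPolicy for Parallel {}

// A single type, generic over the execution policy, instead of Object/ParObject
pub struct Sampler<P: ExecutionPolicy> {
    data: Vec<f64>,
    policy: PhantomData<P>,
}

impl<P: ExecutionPolicy> Sampler<P> {
    pub fn new(data: Vec<f64>) -> Self {
        Self { data, policy: PhantomData }
    }
}

// Policy-specific behavior lives in separate impl blocks
impl Sampler<Serial> {
    pub fn sum(&self) -> f64 {
        self.data.iter().sum()
    }
}

impl Sampler<Parallel> {
    pub fn sum(&self) -> f64 {
        // a real implementation might dispatch to rayon here;
        // the serial fallback keeps this sketch self-contained
        self.data.iter().sum()
    }
}

This keeps a single semantic type while letting the serial and parallel implementations diverge only where they must.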

On the implementation:

- The only type implementing `ShapeFn` is `KOLShapeFn`; what does "KOL" stand for? The documentation could be more extensive than just "A simple Curl-Conforming Hierarchical Shape Function".

- It would be helpful to users to implement one or more of the classic non-Curl basis functions (GL, GLL, Bernstein) as reference implementations.

- The only integral operator defined is `curl_curl`. It would help new users implement their own integrals if classic operators like a mass operator `< u, v >` and a Poisson operator `< ∇u, ∇v >` were also implemented.

- Would it be possible to have access to some linear solvers? These also seem essential for finite element analysis.

jeremiah-corrado commented 2 years ago

Hi, @YohannDudouit, thanks again for the feedback! Quite a few changes have been made to the crate to address your concerns. I also updated the paper to reflect these changes.

The following is a point-by-point response to your observations/suggestions. Each point either explains how the crate was updated to reflect your suggested change, or explains why the suggested change was not made. There are also a few questions mixed in asking for clarification.

Structure/Organization/Documentation

• The module structure has been separated into two high-level modules: fem_domain and fem_problem. fem_domain contains the Domain, Mesh, and Basis Function structures (everything needed to set up an FEM Domain). fem_problem contains the integration and linear algebra functionality for defining and solving FEM problems. There is still a prelude module with most of the important structures, functions, and traits.
• All the GLQ functionality has been moved into a single GLQ module under the Integration module (with improved documentation).
• ShapeFn has been renamed to HierCurlBasisSpace. This is a special case of the more general concept: Basis-Space. The new documentation, along with the docs for HierCurlBasisFn, should better explain how to create a custom implementation.
• The names of the basis function structs/traits have all been changed. There is now a HierBasisFn Trait that will be the parent trait for any other Hierarchical Basis Function Types. There is currently no Trait for non-hierarchical basis functions.
• The new names in the basis module are now more descriptive: HierBasisFn refers to the actual instantiation over some Elem, while a BasisFnSpace describes the generically-specified functional space used to compute the instantiation.
• I think the updated documentation is sufficient to clarify the conceptual difference between Elem and Element.
• I am having trouble understanding the logic behind making this separation. The Mesh describes the set of Elements and Elems that make up the discretization. These structures themselves are composed of Nodes and Edges. They are also the structures that are subject to h-refinement and p-refinement. All of them operate within the same conceptual space and thus fit nicely within the same module.
• M2D and V2D should be in the space module as part of the broader mesh module because they describe points and transformations in the 2D physical/parametric spaces where elements are defined. These are specialized data structures that do not have any utility to the linear algebra portion of the crate.

On the Design:

• Similar to the point above, I don't see the benefit in having another mesh Type. Currently the Mesh structure describes the set of Elements and Elems that make up the FEM Domain. What would be gained by making a conceptual separation between the geometric features (Cells, Faces, Edges, Nodes) and the hp-refinement functionality? As far as I understand, there is never a stage of the FEM simulation process where we would want an instantiated definition of the Mesh that does not have the capacity to be refined.
• I am defining a Degree-of-Freedom as a set of one or more BasisSpecs. A BasisSpec gives an abstract definition for a Basis Function over an individual Elem. Specifically, it defines the expansion-orders, direction, and classification (Element-Type, Edge-Type, Node-Type). Element-Type basis functions constitute degrees of freedom on their own, as they are zero-valued along the relevant edges. Pairs of Edge-Type functions constitute degrees of freedom, as they are non-zero on one relevant edge and must share a scalar value to enforce continuity. Sets of four Node-Type functions make up degrees of freedom for the same reason. As such, a DoF describes a unit of the global solution which can take a scalar value. My design may be clearer if you look under the hood at the BasisSpecGroup structure in dof.rs (perhaps this field of DoF should be made public?).
• There is an impl From<SparseMatrix> for nalgebra::DMatrix<f64> in the linalg module. A user could definitely decompose a GEP into its individual matrices and use nalgebra functions to manipulate them (see the sketch after this list). Is this what you mean?
• I haven't implemented all these traits; however, I think I've put a nice structure in place for those alternate basis function types to be developed over time.
• Continuity conditions are checked explicitly on line 42 of galerkin.rs. Mathematically speaking, continuity is enforced in the design of the HierCurlBasisFn struct and the matching algorithm for the BasisSpecs.
• The ParBasisFnSampler and BasisFnSampler structs were combined.
• I think more complex material definitions are something that other researchers can implement if there is interest. My understanding is that functionally defined material properties are only useful for defining meta-materials, which are a fairly specialized mathematical tool. The current setup gives a decent amount of freedom, as both material constants are complex-valued.
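For reference, a minimal sketch of that decomposition, assuming the SparseMatrix path from the 0.2.1 docs and relying only on the From conversion mentioned above (the function and variable names are illustrative):

use fem_2d::fem_problem::linalg::sparse_matrix::SparseMatrix;
use nalgebra::DMatrix;

// Reduce the generalized eigenproblem A x = λ B x to a standard one, B⁻¹ A x = λ x,
// using only the From<SparseMatrix> conversion and nalgebra's dense routines.
fn reduce_gep(a: SparseMatrix, b: SparseMatrix) -> Option<DMatrix<f64>> {
    let a_dense: DMatrix<f64> = a.into();
    let b_dense: DMatrix<f64> = b.into();
    let b_inv = b_dense.try_inverse()?; // fails if B is singular
    Some(b_inv * a_dense)
}

// The result can then be handed to any of nalgebra's eigen-decompositions,
// e.g. reduce_gep(a, b).map(|m| m.complex_eigenvalues()).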

On the Implementation:

• There is also a HierMaxOrtho Basis Function Space which can be included with the max_ortho_basis feature-flag on the nightly compiler.
• I've included the mathematical definition of the HierPoly basis space in the documentation, as well as a reference to the full paper associated with the HierMaxOrtho basis space. As such, I think there is sufficient context for someone to implement other basis spaces.
• There is a mass operator. The Poisson operator would be more useful with H(Div) continuity conditions, which are not implemented yet.
• I think this is also something that other researchers might want to implement if they need it. The library's module structure is friendly to these types of additions. Linear solvers could sit next to the eigensolvers in the linalg module. For example, a function called nalgebra_solve_lin could be added to the linalg::nalgebra_solve module (see the sketch below). The accompanying sampling function could sit next to galerkin_sample_gep_hcurl in the galerkin module and reuse the [SparseMatrix](https://docs.rs/fem_2d/0.2.1/fem_2d/fem_problem/linalg/sparse_matrix/struct.SparseMatrix.html) structure.
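A hedged sketch of what such a hypothetical nalgebra_solve_lin function might look like, relying only on the From<SparseMatrix> conversion and nalgebra's LU solver:

use fem_2d::fem_problem::linalg::sparse_matrix::SparseMatrix;
use nalgebra::{DMatrix, DVector};

// Solve A x = b with a dense LU factorization; adequate for modest problem sizes
// and easy to swap for a sparse or iterative solver later.
pub fn nalgebra_solve_lin(a: SparseMatrix, b: DVector<f64>) -> Option<DVector<f64>> {
    let a_dense: DMatrix<f64> = a.into();
    a_dense.lu().solve(&b)
}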

YohannDudouit commented 2 years ago

Hi @jeremiah-corrado, I did the thought experiment of implementing the most basic example I could think of: a mass `< u, v >` system (Ax = b) for an H1 space. Such a simple example seems to require a major redesign/contribution; tell me what you think. Here is my conclusion of what I would need to do:

Redesign HierCurlIntegral:

Finally, Rust can behave in many ways like a functional programming language, which makes me wonder why there isn't simply a generic implementation of numerical integration. It seems fairly trivial to implement the generic case directly, instead of having HierCurlIntegral, HierIntegral, CurlIntegral, H1Integral, DivIntegral with non-intuitive, differing interfaces. It would probably make sense to introduce PhysicalCoordinates and ReferenceCoordinates too, allowing changes of coordinates to be performed automatically during the numerical integration: the multiplication by the Jacobian inverse for the gradient of an H1BasisFn, and Piola transformations in the case of a CurlBasisFn.

Introduce the notion of linear form to compute b.

Currently, all integrals are expected to be of the form `< u, v >`. What about source terms of the form `< f, v >`, where f is not a polynomial? This seems to be mainly a limitation due to the design choice to represent "integrals".

Introduce the notion of non-constant coefficients.

Currently, only coefficients that are constant per element are supported, and they expect exactly two quantities... This seems very limited for a finite element code. What about coefficients described by polynomials, just as the solution is? What about analytic coefficients using physical coordinates? Here also, the design of the integrals will prevent extensibility.

Link with a linear algebra library.

jeremiah-corrado commented 2 years ago

Hi @YohannDudouit, thank you for the additional feedback! Below, we respond to your suggestions and proposed improvements.

FEM_2D Goals

First, given that we significantly improved the manuscript in the latest version based on the general feedback we received, we wanted to clarify the library's goals, as we agree that the initial draft of the paper explained this poorly and was unclear in this regard. For the library's capabilities, we emphasized the core, novel functionality, namely the ability to conduct Refinement-by-Superposition (RBS) hp-refinements with theoretically optimal (i.e., exponential) rates of convergence. We also believe strongly in the need for greater replicability in applied mathematics and scientific computing research, which was one of the primary motivating factors in open-sourcing this library. We also hope to assist others in the FEM research community in investigating Refinement-by-Superposition methods, either via our library or through their own implementations based on or inspired by it. Our intent is not to replace or even compete with the excellent finite element libraries with decades' worth of patches, but instead to supply a novel, yet fully usable, approach to arbitrary levels of isotropic and anisotropic hp-refinement, which is not supported by existing open-source projects.

In order to support this type of work, we included a light amount of functionality targeting the Maxwell Eigenvalue Problem that can be extended to other problems and applications. However, with our focus on providing a novel approach to Refinement-by-Superposition, the bulk of the library (in terms of breadth of functionality, number of lines of code, and depth of documentation) is located in the Mesh implementation. Or, more specifically, in the hp-refinement API, which has distinct advantages over other FEM software due to the underlying Refinement-by-Superposition implementation. With this in mind, the code in the mesh module is of primary importance.

We amended this description in the latest version of the paper to clarify our focus. We also agree that minor adjustments to the implementation framework (particularly surrounding integration), which we have described below, can simplify extensions by other researchers/practitioners to their applications of interest and enhance the generalizability and broad applicability of our library for finite element methods.

Integration

With that aside, we definitely see your point about the generic interface for integration. In its current state, the various layers of Traits and Structures are somewhat convoluted, and it would be much nicer to have an overarching Integral Trait that applies to any kind of Basis-Function.

Below, I've given a brief description of some of the current design choices in response to your questions. I will then describe a modified design that aims to address your concerns.

Functional Style GLQ:

There is a functional-style Gauss-Quad integration function in the GLQ Module, which is similar to what you suggested. This function is used in the implementation of the Integrals. The difference is that the generically defined Closure is of the type:

Fn(usize, usize) -> f64

This assumes that the basis function has already been evaluated and simply needs to be sampled at a set of indices in parametric space.

Alternatively, per your suggestion, basis functions (or functions of basis functions) could be computed on the fly in a closure like:

Fn(f64, f64) -> f64

However, this choice does not permit pre-computation/memoization of basis-function evaluations (both the HierPoly and MaxOrtho Basis Spaces are somewhat expensive to compute). The BasisFnSampler is responsible for caching computed Basis-Functions during Galerkin Sampling.
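To make the contrast concrete, here is a minimal, self-contained sketch of an index-based quadrature helper in the style described above (illustrative names only, not fem_2d's actual internals):

// Generic 2D Gauss-Legendre quadrature over pre-evaluated samples.
// `f(i, j)` returns the integrand already evaluated at quadrature point (i, j),
// so expensive basis-function evaluations can be memoized outside this function.
fn glq_integrate_indexed<F>(wu: &[f64], wv: &[f64], f: F) -> f64
where
    F: Fn(usize, usize) -> f64,
{
    let mut sum = 0.0;
    for (i, wi) in wu.iter().enumerate() {
        for (j, wj) in wv.iter().enumerate() {
            sum += wi * wj * f(i, j);
        }
    }
    sum
}

// An Fn(f64, f64) -> f64 variant would instead evaluate the basis functions inside
// the closure at every call, forfeiting the memoization described above.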

Vectorial-Direction and Expansion-Order Arguments:

Furthermore, we included the q_dir, p_dir, p_orders, and q_orders arguments in the integrate functions, because a HierBasisFn pre-computes ALL the Basis Functions for both directions (BasisDir::U and BasisDir::V) from zero-ordered up to the specified maximum orders i_max and j_max. (Or, more accurately, it computes sets of vectors for each direction and order, so that sampling a BasisFn at a specific point amounts to a single multiplication operation.)

As such, the current integrate functions essentially select the pre-computed basis-function vectors corresponding to the requested directions and expansion orders, and then accumulate the weighted products over the quadrature points (much like the index-based quadrature sketch above, but with the direction and order selection handled inside the function).

Separation of GLQ Weights:

The weight vectors are separate from the basis function evaluation for two reasons:

New Design

The following sample outlines a redesigned integration framework to permit a more straightforward implementation of integral operators for H1 basis-functions (or basis-functions of any other kind). The relevant H1 Traits and Structs are included to show the general structure/organization of their implementation.

Most importantly, the new Integral Trait more closely resembles the generic mathematical concept of an inner product between two basis functions.

(I've included the unimplemented!() macro and some dummy functions as necessary so that you can paste this into an IDE, compile, make changes, etc.)

use std::marker::PhantomData;

fn main() {
    // setup GLQ points
    let [xu, wu] = glq_points_and_weights();
    let [xv, wv] = glq_points_and_weights();

    // setup a sampler for an H(curl) implementation of PolyBasis
    let mut hc_bs_sampler = HCurlBasisFnSampler::<PolyBasis>::new(xu, xv);

    // sample two basis functions (Elem-specific info is missing in this example, but isn't relevant to the design)
    let basis_p = hc_bs_sampler.sample(BasisDir::U, [1, 2]);
    let basis_q = hc_bs_sampler.sample(BasisDir::U, [3, 0]);

    // setup a Mass integrator
    let mass = Mass::with_weights(wu, wv);

    // compute an integral with the two basis function samples
    // The HCurl implementation of Mass's integral impl is automatically used due to the argument type
    let solution = mass.integrate(basis_p, basis_q);
}

// ---------------------------------------------------------------------------------------------
// Basis Spaces (implementation details are left out, as they are not relevant to this design summary)
// ---------------------------------------------------------------------------------------------

// have traits for each kind of basis function space
pub trait HCurlBasisSpace {}
pub trait H1BasisSpace {}

// implement some specific basis spaces. Overlap is allowed (i.e. there is a PolyBasis impl for both H1 and HCurl)
pub struct PolyBasis {}
impl HCurlBasisSpace for PolyBasis {}
impl H1BasisSpace for PolyBasis {}

pub struct SomeOtherBasisSpace {}
impl HCurlBasisSpace for SomeOtherBasisSpace {}

// ---------------------------------------------------------------------------------------------
// Basis Function Samplers
// ---------------------------------------------------------------------------------------------

// define a sampler specifically for H(Curl) Basis-Functions
pub struct HCurlBasisFnSampler<B: HCurlBasisSpace> {
    xu: Vec<f64>,
    xv: Vec<f64>,
    bs: PhantomData<B>,
}

impl<B: HCurlBasisSpace> HCurlBasisFnSampler<B> {
    pub fn new(xu: Vec<f64>, xv: Vec<f64>) -> Self {
        Self {
            xu,
            xv,
            bs: PhantomData,
        }
    }

    // This is the most important line of this example!
    // The direction and order are specified by the caller here, rather than in the arguments to integrate()
    pub fn sample(&self, dir: BasisDir, orders: [usize; 2]) -> HCurlBFSample {
        unimplemented!()
    }
}

// define a separate sampler specifically for H1 Basis Functions
pub struct H1BasisFnSampler<B: H1BasisSpace> {
    xu: Vec<f64>,
    xv: Vec<f64>,
    bs: PhantomData<B>,
}

impl<B: H1BasisSpace> H1BasisFnSampler<B> {
    pub fn new(xu: Vec<f64>, xv: Vec<f64>) -> Self {
        Self {
            xu,
            xv,
            bs: PhantomData,
        }
    }

    // The other most important line of this example!
    // The direction does NOT have to be provided as an argument for the H1 basis sample
    pub fn sample(&self, orders: [usize; 2]) -> H1BFSample {
        unimplemented!()
    }
}

// A sample of an H(Curl) basis function
// Contains only the information needed to evaluate the basis function (or its derivative) 
//    at a specific order (i, j) and with a specific direction
pub struct HCurlBFSample<'a> {
    dir: BasisDir,
    u: &'a Vec<f64>,
    v: &'a Vec<f64>,
    u_d1: &'a Vec<f64>,
    v_d1: &'a Vec<f64>,
}

impl<'a> HCurlBFSample<'a> {
    pub fn from_vectors(
        dir: BasisDir,
        u: &'a Vec<f64>,
        v: &'a Vec<f64>,
        u_d1: &'a Vec<f64>,
        v_d1: &'a Vec<f64>,
    ) -> Self {
        Self {
            dir,
            u,
            v,
            u_d1,
            v_d1,
        }
    }

    pub fn f(&self, [m, n]: [usize; 2]) -> f64 {
        unimplemented!()
    }

    pub fn curl(&self, [m, n]: [usize; 2]) -> f64 {
        unimplemented!()
    }
}

// A sample of an H1 basis function with some specific order
pub struct H1BFSample {}

impl H1BFSample {
    pub fn f(&self, [m, n]: [usize; 2]) -> f64 {
        unimplemented!()
    }
}

// ---------------------------------------------------------------------------------------------
// Integral Trait and Operators
// ---------------------------------------------------------------------------------------------

// The simplified integral trait which can handle pairs of any kind of Basis Function
pub trait Integral<BFS> {
    fn integrate(&self, p_basis_fn: BFS, q_basis_fn: BFS) -> f64;
}

// an example integration operator: Mass
pub struct Mass {
    wu: Vec<f64>,
    wv: Vec<f64>,
}

impl Mass {
    pub fn with_weights(wu: Vec<f64>, wv: Vec<f64>) -> Self {
        Self { wu, wv }
    }
}

// The H(curl) implementation of the Mass integral
impl<'a> Integral<HCurlBFSample<'a>> for Mass {
    fn integrate(&self, p_basis_fn: HCurlBFSample, q_basis_fn: HCurlBFSample) -> f64 {
        unimplemented!()
    }
}

// The H1 implementation of the Mass integral
impl Integral<H1BFSample> for Mass {
    fn integrate(&self, p_basis_fn: H1BFSample, q_basis_fn: H1BFSample) -> f64 {
        unimplemented!()
    }
}

// ---------------------------------------------------------------------------------------------
// Utility Stuff
// ---------------------------------------------------------------------------------------------

// dummy GLQ points function
pub fn glq_points_and_weights() -> [Vec<f64>; 2] {
    unimplemented!()
}

// keep the current BasisDir Enum for vectorial direction selection
pub enum BasisDir {
    U,
    V,
}

Please let me know if the above redesign would address your concerns. If so, I will work on modifying the relevant portions of the library accordingly.

Linear Form

The library's module structure is friendly to this addition, and we believe it would be reasonable for other researchers and practitioners to implement this using the current Galerkin Sampling function as an example. The integration redesign above would also make the integration portion of this addition more straightforward, as the same Integral Trait could be used (where f would be a different kind of BasisFnSample).
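For illustration, a hedged sketch of how a linear form could sit alongside the redesigned Integral Trait (the names below are hypothetical and not part of the crate):

// A tabulation of a (possibly non-polynomial) source function f at the quadrature points
pub struct SourceFnSample {
    pub values: Vec<Vec<f64>>, // values[i][j] = f evaluated at quadrature point (i, j)
}

// Linear-form counterpart of the Integral trait sketched earlier: computes < f, v >
pub trait LinearIntegral<BFS> {
    fn integrate(&self, source: &SourceFnSample, test_fn: &BFS) -> f64;
}

// An impl such as `impl LinearIntegral<H1BFSample> for L2Projection { ... }` could then
// assemble the right-hand-side vector b entry-by-entry during Galerkin sampling.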

Non-Constant Material Coefficients

Similarly, since the Refinement-by-Superposition approach is agnostic to the forms of the integrands (including non-constant coefficients), we believe this could eventually be added via open source contribution. The methods currently being used to map from an Element's real space to a descendant Elem's parametric space (during integration) could be readily used to map a functional material parameter on an Element to any child Elem during integration.
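As a purely illustrative sketch (none of these names exist in the crate), a functionally defined coefficient could be represented as a closure over physical coordinates and evaluated at mapped quadrature points during integration:

// A per-element material coefficient defined as a function of physical coordinates.
pub struct FunctionalCoefficient<F: Fn(f64, f64) -> f64> {
    eval: F,
}

impl<F: Fn(f64, f64) -> f64> FunctionalCoefficient<F> {
    pub fn new(eval: F) -> Self {
        Self { eval }
    }

    // `map_to_physical` stands in for the Element-to-child-Elem parametric mapping
    // mentioned above; here it is simply passed in as another closure.
    pub fn at<M: Fn(f64, f64) -> (f64, f64)>(&self, map_to_physical: M, u: f64, v: f64) -> f64 {
        let (x, y) = map_to_physical(u, v);
        (self.eval)(x, y)
    }
}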

jedbrown commented 1 year ago

I see your intent with quadrature and why a naive $f(x,y)$ interface isn't good for tabulated basis functions. However, a common technique is to provide additional inputs, such as $f(x,y,u,v)$, where $u,v$ have been tabulated at quadrature points. That said, optimized implementations (such as in libCEED) will handle the test functions via matrix operations rather than in an integral interface that returns a scalar. (It's quite surprising the extent to which finite element books, libraries, and commercial codes use inefficient abstractions and data structures.) Anyway, this is kind of a rabbit hole, but it's something you may want to revisit if/when finite element algebra becomes a bottleneck.