mjayadharan / MMMFE-ST-DD

Fluid flow simulator using MFEM and multiscale space-time sub-domains.

Multiscale Mortar Mixed Finite Element method using novel Space-Time Domain Decomposition (MMMFE-ST-DD)

Fluid flow simulator using multiscale space-time domains.

Note: If you use the code for research purposes, please cite the following original publications:

  1. M. Jayadharan, M. Kern, M. Vohralík, and I. Yotov, "A space-time multiscale mortar mixed finite element method for parabolic equations," SIAM Journal on Numerical Analysis 61 (2), 675-706.
  2. M. Jayadharan and I. Yotov, "Multiscale mortar mixed finite element methods for the Biot system of poroelasticity," arXiv preprint arXiv:2211.02949.

Code developed to simulate a time-dependent diffusion problem using Multiscale Mortar Mixed Finite Elements (MMMFE). The model can be easily adapted to simulate other fluid flow models based on linear PDEs. The novelty of the simulator lies in using multiple subdomains, each with its own time step and mesh size. This gives rise to a space-time domain decomposition technique that allows non-matching subdomain grids in both space and time. Subdomain solves are done in parallel across different processors using MPI. Computed solutions are output and visualized on a global space-time grid in the .vtk and .vtu formats. Details of the spaces used and an outline of the algorithm can be found in report.pdf and algorithm.pdf, respectively. Theoretical results guaranteeing convergence and stability of the method, along with a priori error estimates, have been proved and appear in the SIAM Journal on Numerical Analysis paper cited above.

Author


Manu Jayadharan, Department of Mathematics, University of Pittsburgh (9/17/2019)

email: manu.jayadharan@gmail.com, manu.jayadharan@pitt.edu
ResearchGate | LinkedIn

Collaborators


New updates: Aug 2020

Requires deal.II 9.5.2 (the latest version at the time of this update).


deal.II configured with MPI is needed to compile and run the simulations. The latest version of deal.II can be found at: https://www.dealii.org/download.html

deal.II installation instructions: follow the deal.II readme to install the latest version, passing the -DDEAL_II_WITH_MPI=ON and -DCMAKE_PREFIX_PATH=path_to_mpi_lib flags to cmake.
If cmake has trouble finding the MPI library while building, manually pass the locations of the MPI compiler wrappers to cmake as follows:

cmake -DCMAKE_C_COMPILER="/location to/mpicc" \
      -DCMAKE_CXX_COMPILER="/location to/mpicxx" \
      -DCMAKE_Fortran_COMPILER="/location to/mpif90" \
      <..rest of the arguments to cmake..>

A thread on how to solve this issue can be found here.
If you still have trouble configuring deal.II with MPI, please seek help at the dedicated deal.II Google group or contact the author.
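
For reference, a typical out-of-source deal.II build with MPI enabled might look like the sketch below; all paths are placeholders for your own source, MPI, and installation locations.

# Placeholder paths; adjust to your system.
cd /path/to/dealii-source
mkdir build && cd build
# Configure deal.II with MPI support.
cmake -DDEAL_II_WITH_MPI=ON \
      -DCMAKE_PREFIX_PATH=/path/to/mpi_lib \
      -DCMAKE_INSTALL_PREFIX=/path/to/dealii-install \
      ..
# Build and install (use -j<N> to parallelize).
make install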

Compilation instructions.


  1. Run cmake -DDEAL_II_DIR=/path to dealii installation folder/ . from the main directory.

  2. Run make release for faster compilation, or make debug for a more careful compilation with warnings.

  3. Run mpirun -n j DarcyVT, where j is the number of subdomains (MPI processes); see the example session below.
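
A complete session from the main directory might therefore look like the following sketch (the installation path and the choice of 4 processes are only examples):

# Configure against your deal.II installation and build the optimized target.
cmake -DDEAL_II_DIR=/path/to/dealii-install .
make release
# Run with 4 subdomains, one per MPI process.
mpirun -n 4 DarcyVT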

Please contact the author for further instructions.

Quick start guide for the simulator.


Reading from parameter file

Mixed boundary condition

Solution plots

Further improvements.

  1. The bottleneck in the simulation is the projection across the interface from the mortar to the subdomain space-time mesh and vice versa. This is mainly due to the inefficiency of the built-in FEFieldFunction() from deal.II, which can be slow in finding the quadrature points around a general point for an FE defined on a different mesh. This could be sped up significantly by reimplementing the project_boundary_value() subroutine in projector.h, where we could also save the inner products between basis functions from FE spaces defined on different meshes (in this case, the space-time meshes in the subdomain and in the mortar) and reuse them in the remaining projections of the iteration. More information can be found here: note1, note2.
  2. Storage could be optimized by saving the FEValues and FEFaceValues objects during the time-stepping iterations. Currently this is not needed, because the calculations are not yet memory intensive.
  3. Implementing a preconditioner for the GMRES iterations. Further theoretical analysis could accompany this improvement.