PyMPDATA-MPI constitutes a PyMPDATA + numba-mpi coupler enabling numerical solutions of transport equations with the MPDATA numerical scheme in a hybrid parallelisation model with both multi-threading and MPI distributed-memory communication. PyMPDATA-MPI adapts to the API of PyMPDATA, offering domain decomposition logic.
In a minimal setup, PyMPDATA-MPI can be used to solve the following transport equation: $$\partial_t (G \psi) + \nabla \cdot (Gu \psi)= 0$$ in an environment with multiple nodes. Every node (process) is responsible for computing its part of the decomposed domain.
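As an illustration of the decomposition idea, the sketch below (not the package's internal logic; the grid size and the equal-chunk split are assumptions) shows how each MPI worker could determine its slice of the global domain from the rank and communicator size reported by numba-mpi:

```python
# minimal sketch of outer-dimension domain decomposition across MPI workers
# (illustrative only; PyMPDATA-MPI handles the decomposition internally)
import numpy as np
import numba_mpi as mpi

GRID = (96, 96)  # hypothetical global (outer, inner) grid size


def subdomain(n_outer: int, rank: int, size: int) -> slice:
    """slice of the outer dimension owned by the given rank (near-equal chunks)"""
    bounds = np.linspace(0, n_outer, size + 1, dtype=int)
    return slice(bounds[rank], bounds[rank + 1])


local = subdomain(GRID[0], rank=mpi.rank(), size=mpi.size())
psi_global = np.zeros(GRID)       # initial condition defined on the full grid...
psi_local = psi_global[local, :]  # ...but each worker stores and advances only its slice
```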
In spherical geometry, the $G$ factor represents the Jacobian of the coordinate transformation.
In this example (based on a test case from Williamson & Rasch 1989),
domain decomposition is done by cutting the sphere along meridians.
The inner dimension uses the MPIPolar boundary condition class, while the outer dimension uses MPIPeriodic.
Note that the spherical animations below depict simulations without MPDATA corrective iterations,
i.e. only the plain first-order upwind scheme is used.
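For the latitude-longitude coordinates used in this spherical test case, a concrete reading of the $G$ factor (based on standard spherical geometry rather than on the package code): with longitude $\lambda$ and latitude $\phi$, the area element is $R^2 \cos\phi \, \mathrm{d}\lambda \, \mathrm{d}\phi$, hence $G$ is proportional to the cosine of latitude: $$G \propto \cos\phi$$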
In the Cartesian example below (based on a test case from Arabas et al. 2014), a constant advector field $u$ is used (and $G=1$). MPI (Message Passing Interface) is used for handling data transfers and synchronisation, with the domain decomposition across MPI workers done in either the inner or the outer dimension (a user setting). Multi-threading (using, e.g., OpenMP via Numba) is used for shared-memory parallelisation within subdomains (indicated by dotted lines in the animations below), with the threading subdomain split done across the inner dimension (internal PyMPDATA logic). In this example, two corrective MPDATA iterations are employed.
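For orientation, below is a minimal single-process PyMPDATA sketch of a comparable setup (constant advector, $G=1$, two corrective iterations); the grid size, Courant numbers and initial condition are illustrative assumptions, and in a distributed PyMPDATA-MPI run the plain Periodic boundary conditions along the decomposed dimension would be replaced by the package's MPI-aware counterparts:

```python
# hedged single-process sketch of a constant-advector 2D advection setup
# (grid size, Courant numbers and initial condition are illustrative assumptions)
import numpy as np
from PyMPDATA import Options, ScalarField, Solver, Stepper, VectorField
from PyMPDATA.boundary_conditions import Periodic

options = Options(n_iters=3)  # upwind pass plus two corrective MPDATA iterations
nx, ny = 24, 24               # hypothetical per-worker subdomain size
halo = options.n_halo

advectee = ScalarField(
    data=np.zeros((nx, ny)),
    halo=halo,
    boundary_conditions=(Periodic(), Periodic()),
)
advector = VectorField(
    data=(np.full((nx + 1, ny), 0.5), np.full((nx, ny + 1), 0.25)),  # constant Courant numbers
    halo=halo,
    boundary_conditions=(Periodic(), Periodic()),
)
solver = Solver(
    stepper=Stepper(options=options, grid=(nx, ny)),
    advectee=advectee,
    advector=advector,
)
solver.advance(n_steps=100)
```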
```mermaid
flowchart BT
H5PY ---> HDF{{HDF5}}
subgraph pythonic-dependencies [Python]
TESTS --> H[pytest-mpi]
subgraph PyMPDATA-MPI ["PyMPDATA-MPI"]
TESTS["PyMPDATA-MPI[tests]"] --> CASES(simulation scenarios)
A1["PyMPDATA-MPI[examples]"] --> CASES
CASES --> D[PyMPDATA-MPI]
end
A1 ---> C[py-modelrunner]
CASES ---> H5PY[h5py]
D --> E[numba-mpi]
H --> X[pytest]
E --> N
F --> N[Numba]
D --> F[PyMPDATA]
end
H ---> MPI
C ---> slurm{{slurm}}
N --> OMPI{{OpenMP}}
N --> L{{LLVM}}
E ---> MPI{{MPI}}
HDF --> MPI
slurm --> MPI
style D fill:#7ae7ff,stroke-width:2px,color:#2B2B2B
click H "https://pypi.org/p/pytest-mpi"
click X "https://pypi.org/p/pytest"
click F "https://pypi.org/p/PyMPDATA"
click N "https://pypi.org/p/numba"
click C "https://pypi.org/p/py-modelrunner"
click H5PY "https://pypi.org/p/h5py"
click E "https://pypi.org/p/numba-mpi"
click A1 "https://pypi.org/p/PyMPDATA-MPI"
click D "https://pypi.org/p/PyMPDATA-MPI"
click TESTS "https://pypi.org/p/PyMPDATA-MPI"
```
Rectangular boxes indicate pip-installable Python packages (click to go to pypi.org package site).
PyMPDATA-MPI started as an MSc project of Kacper Derlatka (@Delcior) mentored by @slayoo.
Development of PyMPDATA-MPI has been supported by Poland's National Science Centre (grant no. 2020/39/D/ST10/01220).
We acknowledge Poland's high-performance computing infrastructure PLGrid (HPC Centers: ACK Cyfronet AGH) for providing computer facilities and support within computational grant no. PLG/2023/016369.
copyright: Jagiellonian University & AGH University of Krakow
licence: GPL v3
MPI calls are issued from within Numba JIT-compiled code (hence the project does not use mpi4py, but rather numba-mpi).