scientificcomputing / mpi-tutorial

An MPI tutorial in Python
https://scientificcomputing.github.io/mpi-tutorial/
MIT License

Topics to cover #2

Open jorgensd opened 1 year ago

jorgensd commented 1 year ago
minrk commented 1 year ago

One question: what's the right level of detail and explanation? e.g. in #6 I implemented examples from an existing tutorial. It seems a bit silly to copy/paste everything from there, but what I have so far is the bare minimum (no explanation, only external links).

When adapting an existing tutorial, I think it probably makes sense to:

  1. include a brief high-level description of what is covered (i.e. what is send/recv, what is it for)
  2. link to existing tutorial with reference for more detail
  3. add any discussion specific to us that's not in the original (e.g. Python types, numpy, buffers; a sketch follows this list)
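
As a minimal sketch of point 3 (the file name `demo.py` is just illustrative), mpi4py's lowercase send/recv communicates arbitrary Python objects via pickle, while the uppercase Send/Recv works directly on buffer-like objects such as numpy arrays:

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.rank

# Lowercase send/recv: arbitrary Python objects, serialized with pickle.
if rank == 0:
    comm.send({"step": 1, "note": "hello"}, dest=1, tag=0)
elif rank == 1:
    obj = comm.recv(source=0, tag=0)
    print(f"Rank 1 received object: {obj}")

# Uppercase Send/Recv: buffer-like objects (e.g. numpy arrays), no pickling.
# This is faster and maps directly onto the C MPI interface.
if rank == 0:
    data = np.arange(5, dtype=np.float64)
    comm.Send(data, dest=1, tag=1)
elif rank == 1:
    buf = np.empty(5, dtype=np.float64)
    comm.Recv(buf, source=0, tag=1)
    print(f"Rank 1 received buffer: {buf}")

Run with at least two processes, e.g. mpiexec -n 2 python demo.py.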
minrk commented 1 year ago

I think we should also come up with a clear description of who our tutorial is for, and how it is distinct from, versus simply links to, other tutorials. For example:

hherlyng commented 1 year ago

I second that, Min.

I think we should also come up with a clear description of who our tutorial is for, and how it is distinct from, versus simply links to, other tutorials.

hherlyng commented 1 year ago

Here is the resource that Jørgen posted: https://newfrac.gitlab.io/newfrac-fenicsx-training/05-dolfinx-parallel/dolfinx-parallel.html

How about creating an "MPI for Dolfinx" tutorial similar to the above link? That one is outdated in terms of the dolfinx version; the code doesn't work with the newest release.

jorgensd commented 1 year ago

What I had in mind was to make a tutorial that is more interactive than the ones we have mentioned previously. Those usually have a whole program in one cell/block, such as:

from mpi4py import MPI
import dolfinx

# DOLFINx uses mpi4py communicators.
comm = MPI.COMM_WORLD

def mpi_print(s):
    print(f"Rank {comm.rank}: {s}")

# When you construct a mesh you must pass an MPI communicator.
# The mesh will automatically be *distributed* over the ranks of the MPI communicator.
# Important: in this script we use dolfinx.mesh.GhostMode.none.
# This is *not* the default (dolfinx.mesh.GhostMode.shared_facet).
# We will discuss the effects of the ghost_mode parameter in the next section.
mesh = dolfinx.mesh.create_unit_square(comm, 1, 1, ghost_mode=dolfinx.mesh.GhostMode.none)
# Build the cell (dim 2) to vertex (dim 0) connectivity before querying it.
mesh.topology.create_connectivity(2, 0)

mpi_print(f"Number of local cells: {mesh.topology.index_map(2).size_local}")
mpi_print(f"Number of global cells: {mesh.topology.index_map(2).size_global}")
mpi_print(f"Number of local vertices: {mesh.topology.index_map(0).size_local}")
mpi_print("Cell (dim = 2) to vertex (dim = 0) connectivity")
mpi_print(mesh.topology.connectivity(2, 0))

or they are split into static chunks, such as https://mpitutorial.com/tutorials/mpi-send-and-receive/

I think we should follow how mpitutorial.com structures its code (but make it actually executable by using Python and ipynb), and make sure we explain in detail what we are doing.
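
For instance, the ring program from that mpitutorial.com page could be ported to a runnable mpi4py cell along these lines (a sketch assuming numpy and at least two processes; the exact notebook layout is still to be decided):

from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.rank
size = comm.size

# Pass a token around a ring: each rank receives from rank - 1
# and sends to rank + 1, with wraparound, as in the C example.
token = np.empty(1, dtype=np.int64)
if rank == 0:
    token[0] = -1  # rank 0 starts the ring
else:
    comm.Recv(token, source=rank - 1)
    print(f"Rank {rank} received token {token[0]} from rank {rank - 1}")

comm.Send(token, dest=(rank + 1) % size)

# Rank 0 finally receives the token back from the last rank,
# which closes the ring.
if rank == 0:
    comm.Recv(token, source=size - 1)
    print(f"Rank 0 received token {token[0]} from rank {size - 1}")

Run with e.g. mpiexec -n 4 python ring.py (file name illustrative); each rank prints the token as it passes through.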