ltm-erlangen / deal.ii-qc

quasi-continuum approach implemented using deal.II library
GNU Lesser General Public License v2.1

switch to ::Triangulation to have finer control over ghost cells #6

Open davydden opened 8 years ago

davydden commented 8 years ago

p::shared::Tria can behave in two ways: either all cells are ghost cells (i.e. there are no artificial cells), or only one layer of cells around the locally owned ones are ghosts and the rest are artificial. In order to keep communication to a minimum, we need fine control over which cells on each MPI core need displacement fields or shape functions.

see https://www.dealii.org/developer/doxygen/deal.II/classparallel_1_1shared_1_1Triangulation.html#a3a08802762e67dec656edeec59b21583
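For reference, a minimal sketch of the two modes (written against a recent deal.II release; exact constructor defaults may differ):

```cpp
#include <deal.II/base/mpi.h>
#include <deal.II/distributed/shared_tria.h>
#include <deal.II/grid/grid_generator.h>

using namespace dealii;

int main(int argc, char **argv)
{
  Utilities::MPI::MPI_InitFinalize mpi_init(argc, argv, 1);
  const int dim = 2;

  // Variant 1: every non-owned cell is a ghost cell (no artificial cells).
  parallel::shared::Triangulation<dim> tria_all_ghost(
    MPI_COMM_WORLD,
    Triangulation<dim>::none,
    /*allow_artificial_cells=*/false);

  // Variant 2: only one layer of ghost cells around the locally owned
  // cells; everything further away is marked as artificial.
  parallel::shared::Triangulation<dim> tria_one_layer(
    MPI_COMM_WORLD,
    Triangulation<dim>::none,
    /*allow_artificial_cells=*/true);

  GridGenerator::hyper_cube(tria_all_ghost);
  tria_all_ghost.refine_global(3);
  GridGenerator::hyper_cube(tria_one_layer);
  tria_one_layer.refine_global(3);

  // Count cell categories on this rank to see the difference between modes.
  unsigned int n_owned = 0, n_ghost = 0, n_artificial = 0;
  for (const auto &cell : tria_one_layer.active_cell_iterators())
    {
      if (cell->is_locally_owned())
        ++n_owned;
      else if (cell->is_ghost())
        ++n_ghost;
      else
        ++n_artificial;
    }
}
```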

bodduv commented 8 years ago

Frankly, I have to read more about when and how communication between MPI processes happens in deal.II.

Another point is that the current METIS partitioning in deal.II uses no vertex weights, assuming that computation on each cell takes roughly the same time. While in the near future we will need a priori weights assigned to each vertex for partitioning, for now we need all cells to be ghosts.

A short reason is that the distribute_dofs() function would sometimes cut through the atomistic region, assigning parts of it to different MPI processes. If the pair-potential cutoff radius is quite large, we might need more layers of ghost cells.
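For the weighted-partitioning idea, a rough sketch of what per-cell weights could look like, assuming the GridTools::partition_triangulation overload that accepts cell weights (available in newer deal.II releases); the n_atoms_in_cell callback is hypothetical and would come from the QC atom data:

```cpp
#include <deal.II/grid/grid_tools.h>
#include <deal.II/grid/tria.h>

#include <functional>
#include <vector>

using namespace dealii;

template <int dim>
void partition_with_atom_weights(
  Triangulation<dim> &triangulation,
  const unsigned int  n_mpi_processes,
  // Hypothetical callback: number of atoms associated with a given cell.
  const std::function<unsigned int(
    const typename Triangulation<dim>::active_cell_iterator &)>
    &n_atoms_in_cell)
{
  // One weight per active cell, in active-cell-iterator order.
  std::vector<unsigned int> cell_weights;
  cell_weights.reserve(triangulation.n_active_cells());

  for (const auto &cell : triangulation.active_cell_iterators())
    // Base cost per cell plus a contribution per atom, so that fully
    // atomistic cells are not all lumped onto a single MPI process.
    cell_weights.push_back(1000u + 100u * n_atoms_in_cell(cell));

  // METIS-based partitioning honoring the per-cell weights.
  GridTools::partition_triangulation(n_mpi_processes,
                                     cell_weights,
                                     triangulation);
}
```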

davydden commented 7 years ago

I think we can stick with p::s::Tria even though we need some information from cells further away than one layer around a processor's own cells. We would just keep some custom ghost index sets to be able to evaluate the solution field at those locations and to update ghost atoms (those that do not belong to locally owned cells).
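Roughly something like this, a sketch assuming a recent deal.II and a hypothetical list of remote cells that carry ghost atoms; the idea is just to enlarge the ghost index set beyond the standard locally relevant DoFs:

```cpp
#include <deal.II/base/index_set.h>
#include <deal.II/base/mpi.h>
#include <deal.II/dofs/dof_handler.h>
#include <deal.II/dofs/dof_tools.h>
#include <deal.II/fe/fe.h>
#include <deal.II/lac/la_parallel_vector.h>

#include <vector>

using namespace dealii;

template <int dim>
LinearAlgebra::distributed::Vector<double>
make_ghosted_displacement(
  const DoFHandler<dim> &dof_handler,
  // Hypothetical input: cells owned by other processes that contain
  // ghost atoms within the pair-potential cutoff of our own atoms.
  const std::vector<typename DoFHandler<dim>::active_cell_iterator>
    &cells_with_ghost_atoms,
  const MPI_Comm mpi_communicator)
{
  const IndexSet locally_owned_dofs = dof_handler.locally_owned_dofs();

  // Start from the usual "locally relevant" set (one layer of ghosts) ...
  IndexSet ghost_dofs = DoFTools::extract_locally_relevant_dofs(dof_handler);

  // ... and enlarge it by the DoFs of every cell that carries ghost atoms,
  // even if that cell is further than one layer away.
  std::vector<types::global_dof_index> dof_indices;
  for (const auto &cell : cells_with_ghost_atoms)
    {
      dof_indices.resize(cell->get_fe().n_dofs_per_cell());
      cell->get_dof_indices(dof_indices);
      ghost_dofs.add_indices(dof_indices.begin(), dof_indices.end());
    }

  // A ghosted vector importing exactly these DoFs; after
  // update_ghost_values() the displacement field can be evaluated on
  // cells_with_ghost_atoms without further point-to-point communication.
  LinearAlgebra::distributed::Vector<double> displacement(
    locally_owned_dofs, ghost_dofs, mpi_communicator);
  return displacement;
}
```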

davydden commented 7 years ago

From Gassmoeller 2016:

Methods that require information from cells further away than one layer around a processor’s own cells pose a significant challenge for massively parallel computations; we will not discuss this case further.

exactly our case ;-)

Speaking seriously, I don't think we can get away from storing the full triangulation on each MPI core, as opposed to using parallel::distributed::Triangulation; see the image and explanation here: https://www.dealii.org/developer/doxygen/deal.II/group__distributed.html