Closed ljcarlin closed 8 months ago
Thanks; is this a good moment to merge before continuing the communication abstraction?
I'll just make the change to stop processes messaging themselves and then it should be good to go.
Okay, that should be good to go @cburstedde.
I didn't change the context stuff we discussed, but I think it's better to think about that in the abstracted version.
Introduced a distributed mode to the GMT example, which can be enabled with the `-d` flag. In this mode each process knows only a fraction of the points, and at each iteration points are sent to the relevant processes before refinement. Since points are known by multiple processes, we avoid duplication by designating a single owner of each point as responsible for propagating it in the next iteration.

Currently the sphere model is the only model set up for distributed mode. However, the communication code is written in such a way that other models could add support without writing their own communication code.
To support distributed mode a model should (in setup) load a distinct subset of points on each process. Each process must set `model->M` to the number of points loaded on that process. A point must be represented by a struct of size `model->point_size`, and points must be stored in the array `model->points`, which hence has byte size `model->M * model->point_size`. The intersection function `model->intersect` should be written so that an input of `m` refers to the `m`-th point stored in `model->points`.

The communication code uses `p4est_search_partition` with the model-specified intersection function to determine which process domains each point intersects. Points are then communicated to the relevant processes, and the array `model->points` is updated, along with the local point count `model->M`.