ForestClaw / forestclaw

Quadtree/octree adaptive PDE solver based on p4est.
http://www.forestclaw.org
BSD 2-Clause "Simplified" License

iterate over parallel ghost patches? #91

Closed donnaaboise closed 7 years ago

donnaaboise commented 9 years ago

Originally reported by: Donna Calhoun (Bitbucket: donnaaboise, GitHub: donnaaboise)


I need a way to make sure that ghost patches have valid ghost cell data. This can be a problem if a ghost patch is on more than one parallel boundary.

This could help to make sure ghost patches have valid ghost cell data.

#!c

/* Iterate over all ghost (off-processor) patches on a given level */
fclaw2d_iterate_ghost_patches(domain, ...., level, face_cb);

void face_cb(fclaw2d_domain_t *domain,
             fclaw2d_patch_t *this_ghost_patch,
             void *user)
{
   fclaw2d_patch_t *ghost_neighbor;
   int iface;

   for (iface = 0; iface < 4; iface++)
   {
      find_ghost_neighbors(this_ghost_patch, iface, &ghost_neighbor);

      if (ghost_neighbor != NULL)
      {
           /* Do something only if ghost patches came from different procs */
           if (ghost_neighbor->origin_rank != this_ghost_patch->origin_rank)
           {
                /* Copy/average face data from/to ghost neighbor */
           }
      }
   }
}

Ghost patches only share faces, so no corner exchanges would be needed. Also, only ghost patches that originated from different procs need to do a face exchange; ghost patches that started out on the same proc will have already exchanged ghost cell data at faces before the parallel exchange.

Something like this should work. I suppose a complicating factor is the transform that would be needed at block boundaries....

(Attachment: debug_ghostpatches.png)


donnaaboise commented 8 years ago

Original comment by Donna Calhoun (Bitbucket: donnaaboise, GitHub: donnaaboise):


fixed

donnaaboise commented 8 years ago

Original comment by Donna Calhoun (Bitbucket: donnaaboise, GitHub: donnaaboise):


Yes! I haven't seen a NAN in months....

donnaaboise commented 8 years ago

Original comment by Carsten Burstedde (Bitbucket: cburstedde, GitHub: cburstedde):


We have the fclaw2d_domain_indirect_neighbors functionality now. Does it address the issue sufficiently?

donnaaboise commented 9 years ago

Original comment by Donna Calhoun (Bitbucket: donnaaboise, GitHub: donnaaboise):


This “multi-proc corner” problem (corners where three or more procs meet) is almost a non-problem. It would be interesting to figure out, but it seems that three or more procs can only meet in very few places in the domain. Something like O(p)? (p = #procs.) And even then, an issue can only arise if both fine and coarse grids meet at this multi-proc corner. See attached.

So maybe it would be possible to detect these few outlier cases and do an exchange with only the (very small) amount of data required to fill ghost cells in patches with corners at which 2 or more remote procs meet. This would eliminate at least one situation in which parallel and serial runs might differ.

Currently, things work, I think, because of the limiting that happens: at least one one-sided gradient is valid (the one using only interior data), and this seems to be the one that is chosen (although this could of course be platform dependent).

(Attachments: filament_mpi_16 copy.png, filament_mpi_04 copy.png)