At present we use a simple blocking MPI_Sendrecv construct to do the MPI boundary exchanges. This is likely not optimal, as it requires packing the boundary data into intermediate buffers, i.e. multiple copies of the data. Rewriting the communication to be non-blocking and using custom MPI datatypes seems like the obvious alternative.
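A minimal sketch of what that alternative could look like, assuming a 2-D row-major grid of doubles decomposed along x with hypothetical `left`/`right` neighbour ranks (the actual field layout and decomposition in our code may differ). A strided MPI_Type_vector describes a halo column in place, so MPI_Isend/MPI_Irecv can move it without the explicit pack/unpack copies the blocking Sendrecv needs, and interior work can overlap with the exchange before MPI_Waitall:

```c
#include <mpi.h>
#include <stdlib.h>

#define NX 128
#define NY 128

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Hypothetical periodic 1-D decomposition along x. */
    int left  = (rank - 1 + size) % size;
    int right = (rank + 1) % size;

    double *field = calloc((size_t)NX * NY, sizeof(double));

    /* One grid column: NY blocks of 1 double, stride NX (row-major). */
    MPI_Datatype column;
    MPI_Type_vector(NY, 1, NX, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    MPI_Request reqs[4];

    /* Receive directly into the halo columns 0 and NX-1. */
    MPI_Irecv(&field[0],      1, column, left,  0, MPI_COMM_WORLD, &reqs[0]);
    MPI_Irecv(&field[NX - 1], 1, column, right, 1, MPI_COMM_WORLD, &reqs[1]);

    /* Send the owned edge columns 1 and NX-2 in place, no packing. */
    MPI_Isend(&field[1],      1, column, left,  1, MPI_COMM_WORLD, &reqs[2]);
    MPI_Isend(&field[NX - 2], 1, column, right, 0, MPI_COMM_WORLD, &reqs[3]);

    /* Interior updates could be computed here, overlapping communication. */

    MPI_Waitall(4, reqs, MPI_STATUSES_IGNORE);

    MPI_Type_free(&column);
    free(field);
    MPI_Finalize();
    return 0;
}
```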