lucien-roach / a-dda

Automatically exported from code.google.com/p/a-dda

Optimize MPI operations #20

Open GoogleCodeExporter opened 8 years ago

GoogleCodeExporter commented 8 years ago
Several possible ways to optimize the MPI part of the code (a sketch of items 2 and 3 is given below):
1) use one buffer for all MPI communications;
2) possibly use MPI_ALLTOALL and 'derived datatypes' for block_transpose;
3) use 'persistent communication requests' for repeated communications.

Original issue reported on code.google.com by yurkin on 28 Nov 2008 at 6:49
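
To make items 2 and 3 concrete, here is a minimal C sketch (not ADDA code; the buffer sizes, the pairing of ranks, and the iteration count are illustrative assumptions). It sets up a persistent point-to-point exchange once and restarts it on every solver iteration, and it performs a block transpose with a single MPI_Alltoall; the 'derived datatypes' part, needed when the per-destination blocks are not contiguous, is not shown.

```c
/* Minimal sketch of items 2 and 3 (illustrative, not ADDA code). */
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);
    int rank, nprocs;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    const int blk = 1024;   /* doubles per per-rank block (illustrative) */
    double *sendbuf  = malloc((size_t)nprocs * blk * sizeof(double));
    double *recvbuf  = malloc((size_t)nprocs * blk * sizeof(double));
    double *halo_out = malloc(blk * sizeof(double));
    double *halo_in  = malloc(blk * sizeof(double));

    /* Item 3: build the repeated point-to-point exchange once ... */
    int peer = rank ^ 1;                 /* pair neighboring ranks */
    if (peer >= nprocs) peer = rank;     /* self-exchange if nprocs is odd */
    MPI_Request req[2];
    MPI_Send_init(halo_out, blk, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req[0]);
    MPI_Recv_init(halo_in, blk, MPI_DOUBLE, peer, 0, MPI_COMM_WORLD, &req[1]);

    for (int iter = 0; iter < 100; iter++) {
        /* ... fill sendbuf and halo_out here ... */

        /* Item 2: one collective performs the block transpose:
         * block j of rank i arrives as block i on rank j. */
        MPI_Alltoall(sendbuf, blk, MPI_DOUBLE, recvbuf, blk, MPI_DOUBLE,
                     MPI_COMM_WORLD);

        /* ... and restart the pre-built exchange on every iteration. */
        MPI_Startall(2, req);
        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);

        /* ... use recvbuf and halo_in here ... */
    }

    MPI_Request_free(&req[0]);
    MPI_Request_free(&req[1]);
    free(sendbuf); free(recvbuf); free(halo_out); free(halo_in);
    MPI_Finalize();
    return 0;
}
```

Item 1 (one buffer for all MPI communications) would then amount to reusing a single preallocated buffer pair across all of these calls instead of allocating per call.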

GoogleCodeExporter commented 8 years ago

Original comment by yurkin on 10 Jun 2011 at 2:03

GoogleCodeExporter commented 8 years ago
Another large area for possible optimization is file I/O using the standard MPI I/O functions. This can also be used to address issue 90 and issue 31.

Original comment by yurkin on 9 Nov 2011 at 5:31
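
A minimal sketch of what file I/O through the standard MPI functions could look like (hypothetical function, not ADDA code; it assumes every rank holds a contiguous chunk of equal length): all ranks write their parts of a distributed array into one shared file with a collective write.

```c
/* Illustrative MPI-IO sketch: every rank writes its chunk of a
 * distributed array to a single shared file at its own offset.
 * Assumes all chunks have the same length local_n. */
#include <mpi.h>

void write_distributed(MPI_Comm comm, const double *chunk, int local_n,
                       const char *fname)
{
    int rank;
    MPI_Comm_rank(comm, &rank);

    MPI_File fh;
    MPI_File_open(comm, fname, MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Offset of this rank's chunk: everything owned by lower ranks. */
    MPI_Offset offset = (MPI_Offset)rank * local_n * (MPI_Offset)sizeof(double);
    MPI_File_write_at_all(fh, offset, chunk, local_n, MPI_DOUBLE,
                          MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
}
```

The collective `_all` variant lets the MPI library aggregate the writes, which is typically where the gain over per-rank writes comes from.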

GoogleCodeExporter commented 8 years ago
Another interesting idea is to use "distributed arrays" data structures instead of the manual distribution that is done now. The advantage may be that the MPI runtime becomes aware of the exact exchanges to be performed inside a single node, so they can be substantially optimized (sometimes even omitted entirely). This may also address issue 137.

Original comment by yurkin on 23 Nov 2011 at 4:09
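
One possible reading of "distributed arrays" is MPI's darray datatype: the global decomposition is described to the library once instead of being tracked by hand. A minimal sketch under that assumption (grid size, process grid, and slicing axis are illustrative; not ADDA code):

```c
/* Sketch (illustrative assumptions, not ADDA code): describe a 3-D array
 * distributed in slices along the first axis with MPI_Type_create_darray,
 * so the MPI library knows which part of the global array each rank owns. */
#include <mpi.h>

MPI_Datatype make_darray_type(int nprocs, int rank)
{
    int gsizes[3]   = {64, 64, 64};              /* global grid (illustrative) */
    int distribs[3] = {MPI_DISTRIBUTE_BLOCK,     /* slice the first dimension  */
                       MPI_DISTRIBUTE_NONE,
                       MPI_DISTRIBUTE_NONE};
    int dargs[3]    = {MPI_DISTRIBUTE_DFLT_DARG,
                       MPI_DISTRIBUTE_DFLT_DARG,
                       MPI_DISTRIBUTE_DFLT_DARG};
    int psizes[3]   = {nprocs, 1, 1};            /* process grid: nprocs x 1 x 1 */

    MPI_Datatype darray;
    MPI_Type_create_darray(nprocs, rank, 3, gsizes, distribs, dargs, psizes,
                           MPI_ORDER_C, MPI_DOUBLE, &darray);
    MPI_Type_commit(&darray);
    /* Typical use: MPI_File_set_view(fh, 0, MPI_DOUBLE, darray, "native",
     * MPI_INFO_NULL) so each rank reads/writes only its own slice. */
    return darray;
}
```

Such a type is most directly useful as an MPI-IO file view; whether it also lets the runtime optimize intra-node exchanges, as hoped above, would have to be benchmarked.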