rdolbeau closed this pull request 7 years ago
Overall this looks good, thanks! I ran a test program containing MPI_Iallreduce and can see that this call is now monitored by IPM.
Can you make some changes before we merge please?
1. Assign a unique ID to each call. I notice that both MPI_IALLGATHERV_ID and MPI_COMM_FREE_ID have id=60. (This is a nice change to make but not critical, because the current version of bin/make_wrappers ignores this id field.)
2. Add your new collective calls to IS_COLLECTIVE_CALL_ID in include/mod_mpi.h. (Also nice but not critical, because the macro is only used when IPM is configured with --enable-coll-details.)
3. Fix the Fortran versions of the new calls: return void rather than integer from mpi_iallgatherv, and remove the const qualifiers from its arguments.
4. Make the argument names consistent with the blocking versions of these calls. I've been reading bin/make_wrappers and can see it looks for hard-coded strings, e.g. "comm_inout" and "comm_out" for communicators.
5. If possible, can you provide declarations for all non-blocking versions of the collective calls? (This would be a great feature to add to IPM.)
Thanks, Chris
I'll try to work on this next week; I'm on a trip at the moment. Thanks for the helpful feedback.
The new commit adds the asynchronous form of all the collectives IPM already wraps. It seems IPM is missing barrier, alltoallw, reduce_scatter_block and exscan, so those have no non-blocking wrappers either.
In Fortran I also fixed a mistake: the request argument comes before ierror (since ierror is always last).
Looks good. I will shortly add a commit so that IPM will continue to work with older MPI implementations.
Hello, IPM isn't tracking the asynchronous collectives. This adds support for four of them. Hopefully I'm doing this properly.