nerscadmin / IPM

Integrated Performance Monitoring for High Performance Computing
http://ipm-hpc.org
GNU Lesser General Public License v2.1

do not handle MPI_IN_PLACE in Fortran with Open MPI higher than 1.6 #6

Closed ggouaillardet closed 9 years ago

ggouaillardet commented 9 years ago

The OMPI_F2C_IN_PLACE macro is only available up to Open MPI 1.6, so MPI_IN_PLACE cannot be handled in Fortran with later releases. This fixes nerscadmin/IPM@8f628dadc502b3e0113d6ab3075bf66b107f07e5 with Open MPI > 1.6.

ggouaillardet commented 9 years ago

Even with Open MPI 1.6, this works only if it was configured with --with-devel-headers, so this PR is not fully robust (but better than nothing ...).

jsquyres commented 9 years ago

Indeed, I would not recommend using the OMPI_F2C_IN_PLACE macro at all -- it's internal to Open MPI, and, as you have seen, subject to change.

njwright commented 9 years ago

What has it been replaced by?

We need a way of passing MPI_IN_PLACE from Fortran to C and back again.


swfrench commented 9 years ago

Thanks for catching this!

Assuming we can deduce (a) whether we can use weak symbols or, better yet, (b) the appropriate Fortran name-mangling scheme, could we do something like what is done in ompi/mpi/fortran/base/constants.h? Would the same set of internal symbols also be portable to 1.7?

jsquyres commented 9 years ago

@njwright There is no spec-defined mechanism in MPI for passing MPI_IN_PLACE back and forth between C and Fortran, sorry.

Specifically: that symbol is allowed to have a different value in each language -- the two values are not intended to be comparable.

jsquyres commented 9 years ago

@swfrench Sorry, ompi/mpi/fortran/base/constants.h is also internal and subject to change. :-(

Let's take a step back: what is it that you're trying to do? I.e., why do you need to know in C that Fortran passed you MPI_IN_PLACE?

njwright commented 9 years ago

We are trying to profile this simple code below.

The issue is that IPM is written in C. When you catch the call from a Fortran code, MPI_IN_PLACE doesn't get passed to the C MPI layer properly, since the sentinel is not the same in Fortran and C.

I took the logic in the IPM code from the Open MPI library itself; what does Open MPI do now that this macro has been removed?

Thanks, Nick.


% cat a1.F
      program a1
      real*8 linfty, linftylocal
      integer me, ierr
      include 'mpif.h'
      call mpi_init(ierr)
      call mpi_comm_rank(mpi_comm_world, me, ierr)
      linfty = me
      CALL MPI_ALLREDUCE(MPI_IN_PLACE, LINFTY, 1,
     &     MPI_DOUBLE_PRECISION, MPI_MAX, MPI_COMM_WORLD, ierr)
      print *, me, linfty
      call mpi_finalize(ierr)
      end


jsquyres commented 9 years ago

@njwright Ah -- forgive me, I know nothing about IPM. :( Is IPM an interposition tool where you're intercepting MPI calls and calling PMPI behind the scenes? If so, you really need to be intercepting Fortran calls in Fortran -- not C. Not only will you avoid symbol-mangling issues, you also won't have this problem with the "special" sentinel values (like MPI_IN_PLACE).

E.g., if you intercept MPI_ALLREDUCE in Fortran, you can compare to MPI_IN_PLACE in Fortran, and then call your back-end IPM C routine with a flag saying "MPI_IN_PLACE was used for the choice buffer" -- or perhaps you might be able to wrangle the C value of MPI_IN_PLACE into your Fortran routine...? You can use the BIND(C) syntax in Fortran to make it easy to call C from Fortran, too.
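To make the Fortran-side approach concrete, here is a minimal sketch (not IPM's actual wrapper; the back-end hook ipm_log_allreduce and its signature are hypothetical). Because the wrapper forwards the original arguments straight to PMPI_ALLREDUCE, a sentinel like MPI_IN_PLACE never crosses the language boundary and is interpreted by the MPI library itself:

      subroutine mpi_allreduce(sbuf, rbuf, count, dtype, op, comm,
     &                         ierr)
      implicit none
      include 'mpif.h'
c     choice buffers: the implicit interface lets any type (including
c     the MPI_IN_PLACE sentinel) pass through untouched
      real*8 sbuf(*), rbuf(*)
      integer count, dtype, op, comm, ierr
      double precision t0, t1
      t0 = mpi_wtime()
c     forward to the profiling (PMPI) entry point in Fortran
      call pmpi_allreduce(sbuf, rbuf, count, dtype, op, comm, ierr)
      t1 = mpi_wtime()
c     hand the timing off to a hypothetical C back-end
      call ipm_log_allreduce(count, dtype, t1 - t0)
      end

With this shape, the C side of the tool never needs to recognize the Fortran sentinel at all, which is exactly what removes the dependency on internal macros like OMPI_F2C_IN_PLACE.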

ggouaillardet commented 9 years ago

@jsquyres Yes, IPM is a profiling tool that is interposed between the application and MPI. Currently, the MPI C symbols are redefined:

MPI_Allreduce(...) {
   start_profiling(...);
   PMPI_Allreduce(...);
   store_profiling_info(...);
}

The Fortran symbols are also redefined to call the redefined C MPI symbols:

void mpi_allreduce_(...) {
   MPI_Allreduce(...);
}

This is all generated by a wrapper generator from an IPM key file:

46|MPI_ALLREDUCE_ID|int MPI_Allreduce(const void *sbuf, void *rbuf, int scount, MPI_Datatype stype, MPI_Op op, MPI_Comm comm_in)|void mpi_allreduce(void *sbuf, void *rbuf, int *scount, int *stype, int *op, int *comm_in, int *info)|CS_FSYNC,BYTES_SCOUNT,RANK_ALL,DATA_COLLECTIVE,COMM_ONE

This approach requires that the Fortran MPI_IN_PLACE (and friends: MPI_BOTTOM, ...) be translated to the C MPI_IN_PLACE (and friends).

I can only guess this approach was faster to implement for Fortran, and it easily ensures the Fortran MPI subroutines are "wrapped" once and only once.

I agree the clean way is to intercept Fortran calls in Fortran (and have another library intercept Fortran 2008 calls in Fortran 2008, but that is another story ...). At first glance, that would require a revamp of the Fortran wrapper generator and the IPM key file.

jsquyres commented 9 years ago

@ggouaillardet Gotcha. Good show on #8!

I remember telling several MPI tool authors about a year or two ago that, especially with MPI-3's mpi_f08 bindings, they needed to write their Fortran MPI wrappers in Fortran. The look of abject horror on their faces told the whole story. :frowning: Meaning: I am quite sure that IPM is not alone -- I think the vast majority of MPI-interception tools catch calls to the (mpif.h) Fortran bindings using C wrappers instead of Fortran wrappers.

Tools like IPM will definitely need to change to catch the "mpi" and "mpi_f08" Fortran bindings with Fortran wrappers. The "special" sentinel values (like MPI_IN_PLACE and MPI_BOTTOM) and the name mangling of module symbols require it. Additionally, there are some MPI Fortran functions that cannot be implemented by simply calling their C counterparts. E.g., if MPI_COMM_CREATE_KEYVAL is called in any of the Fortran bindings, it cannot be implemented by catching the invocation in a C wrapper and then calling the C MPI_Comm_create_keyval() -- the MPI implementation needs to know that the function was invoked from Fortran (there are a handful of these kinds of functions in MPI).

swfrench commented 9 years ago

Hi Gilles - Thanks again for PR #8. Marking this as closed since the latter was merged.

nerscadmin commented 9 years ago

Hi Gilles,

Thanks for the contribution. We're grateful for your help!

-David