mpiwg-hybrid / hybrid-issues

Repository to discuss internal hybrid working group issues

Coarray-fortran #9

Open zerothi opened 2 years ago

zerothi commented 2 years ago

It seems there is interest in this working group in aligning coarrays with MPI?

I would really second this!

As I see it, coarrays in Fortran haven't seen wide adoption because they put too many restrictions on the program. In particular, libraries implemented with coarrays are very hard to use from host programs that rely on MPI. A way to map a set of coarray images to an MPI group/communicator with a 1-1 correspondence would be great.
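
Something like the following, purely hypothetical, interface is what I have in mind; nothing like it exists in MPI or in the Fortran standard today:

```fortran
! Purely hypothetical: the kind of interoperability hook being asked
! for here.  Neither MPI nor the Fortran standard defines this.
interface
  subroutine caf_team_to_comm(team, comm)
    use mpi_f08, only: MPI_Comm
    use, intrinsic :: iso_fortran_env, only: team_type
    type(team_type), intent(in)  :: team   ! a coarray team
    type(MPI_Comm),  intent(out) :: comm   ! communicator over the same images
  end subroutine caf_team_to_comm
end interface
```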

If this is not the place, then sorry for the blurb! :)

jeffhammond commented 2 years ago

You might get more interest from https://github.com/mpiwg-rma/rma-issues/issues because MPI RMA and coarray Fortran are both one-sided models, but I don't think it matters that much.

zerothi commented 2 years ago

Does that mean I should replicate the issue there?

I suspected that CAF did not belong there since, while it is indeed RMA, it is more of a fundamental problem (mapping CAF images to a communicator) than an RMA problem. Not that I wouldn't want to duplicate it! ;)

jeffhammond commented 2 years ago

Regarding coarray teams and MPI groups/communicators, there are two issues:

  1. What is the relationship between coarray images and MPI processes/ranks?
  2. What is the relationship between FORM TEAM and MPI_Comm_create_group, for example?

The hard part is 1. In practice, Intel Fortran, Cray Fortran and GCC with OpenCoarrays all have an equivalent execution model for coarray images and MPI processes, but nothing guarantees this. I don't expect this to ever be standardized by Fortran or MPI, so it's going to be an implementation detail the user/application needs to query/verify.
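
As a quick sanity check, a sketch like this can verify the correspondence at startup. It assumes the coarray runtime sits on MPI, which is true for the three implementations above but guaranteed by neither standard:

```fortran
program check_image_rank
  use mpi_f08
  implicit none
  integer :: rank
  logical :: flag

  ! The coarray runtimes mentioned above happen to sit on MPI, so
  ! MPI may already be initialized before the main program starts.
  call MPI_Initialized(flag)
  if (.not. flag) call MPI_Init()
  call MPI_Comm_rank(MPI_COMM_WORLD, rank)

  ! Coarray images are 1-based; MPI ranks are 0-based.
  if (this_image() /= rank + 1) then
    error stop "image/rank correspondence does not hold here"
  end if
  sync all
  if (this_image() == 1) print *, "image i <-> rank i-1 on every image"
end program check_image_rank
```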

As for 2, it seems that FORM TEAM is equivalent to MPI_Comm_split, at least for the simpler use cases.
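
For illustration, here is a sketch of the two splits side by side. It leans on the unverified image/rank identity from point 1, so treat that as an assumption, not a guarantee:

```fortran
program split_both_ways
  use mpi_f08
  use, intrinsic :: iso_fortran_env, only: team_type
  implicit none
  type(team_type) :: row_team
  type(MPI_Comm)  :: row_comm
  integer :: color
  logical :: flag

  call MPI_Initialized(flag)
  if (.not. flag) call MPI_Init()

  ! Split into groups of 4, once with FORM TEAM and once with
  ! MPI_Comm_split.  Team numbers must be positive, hence color+1.
  color = (this_image() - 1) / 4
  form team (color + 1, row_team)
  call MPI_Comm_split(MPI_COMM_WORLD, color, 0, row_comm)

  ! If the execution models line up, then inside
  ! "change team (row_team)" the values of team_number() and
  ! this_image() mirror the color and rank of row_comm.
end program split_both_ways
```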

jeffhammond commented 2 years ago

No need to duplicate it. As long as you are aware of the RMA connections, that's sufficient.

zerothi commented 2 years ago

I completely agree that it isn't easy! And it will most likely put restrictions on the way the CAF images are allocated.

The main problem with CAF adoption is that it is not portable as a library (my perspective ;-0). It is difficult, if not impossible, to pass a distributed-memory array from an MPI application to a CAF library... :(

At the same time, CAF could in principle use OpenMP under the hood on shared-memory machines. So I agree it isn't trivial, but working towards such a goal would, I think, be ideal! ;)

jeffhammond commented 2 years ago

If you have data that isn't in coarrays, then you'll have to copy it into a coarray when going from MPI to coarrays, but if you allocate memory with coarrays, there's no reason you can't use that memory with MPI libraries.
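
For example (a sketch; it again assumes images and ranks coincide, as discussed above):

```fortran
program coarray_buffer_to_mpi
  use mpi_f08
  implicit none
  real, allocatable :: buf(:)[:]
  logical :: flag

  call MPI_Initialized(flag)
  if (.not. flag) call MPI_Init()

  allocate(buf(1000)[*])
  buf = real(this_image())

  ! The local slice of a coarray is ordinary process-local memory,
  ! so an MPI library is free to read and write it.  Here image 1
  ! (rank 0) overwrites every image's slice via a broadcast.
  call MPI_Bcast(buf, size(buf), MPI_REAL, 0, MPI_COMM_WORLD)
end program coarray_buffer_to_mpi
```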

Coarray images can't be threads without compiler hacks to make all global mutable state thread-private, which basically means they are processes.

jeffhammond commented 2 years ago

But you might also want to consider just using MPI RMA instead of coarrays. RMA isn't perfect, but it's a lot more portable than coarrays right now.
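
For example, here is a sketch of roughly what the coarray assignment x[2] = v looks like in RMA, using passive-target synchronization. It needs at least two ranks and again assumes image i plays the role of rank i-1:

```fortran
program rma_put_like_coarray
  use mpi_f08
  implicit none
  type(MPI_Win) :: win
  integer :: rank
  real :: x, v
  integer(MPI_ADDRESS_KIND), parameter :: disp = 0_MPI_ADDRESS_KIND

  call MPI_Init()
  call MPI_Comm_rank(MPI_COMM_WORLD, rank)
  x = 0.0
  v = 42.0

  ! Expose x on every rank, roughly like declaring "real :: x[*]".
  call MPI_Win_create(x, int(storage_size(x)/8, MPI_ADDRESS_KIND), &
                      storage_size(x)/8, MPI_INFO_NULL, MPI_COMM_WORLD, win)

  if (rank == 0) then
    ! The RMA spelling of the coarray statement  x[2] = v
    call MPI_Win_lock(MPI_LOCK_EXCLUSIVE, 1, 0, win)
    call MPI_Put(v, 1, MPI_REAL, 1, disp, 1, MPI_REAL, win)
    call MPI_Win_unlock(1, win)
  end if

  call MPI_Barrier(MPI_COMM_WORLD)
  call MPI_Win_free(win)
  call MPI_Finalize()
end program rma_put_like_coarray
```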