mpi-forum / mpi-issues

Tickets for the MPI Forum
http://www.mpi-forum.org/

define language-agnostic, IEEE types #66

Open jeffhammond opened 7 years ago

jeffhammond commented 7 years ago

Problem

We define datatypes in terms of ISO languages. It would solve a number of problems (e.g. #65) if we could instead define language-agnostic types aligned with the IEEE 754 floating-point standard. Users would then need to map these MPI datatypes to equivalent language types.

This was originally proposed in https://lists.mpi-forum.org/pipermail/mpiwg-p2p/2017-June/000439.html.

Proposal

Introduce MPI datatypes corresponding to the IEEE 754 binary and decimal interchange formats.

One can imagine MPI datatype names like MPI_IEEE_BINARY64, but symbol names will be decided later.

The MPI standard will require that the storage format for these types match IEEE 754, but it should only encourage that the computations performed by reduction operators on these types be done in an IEEE-compliant manner.
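A usage sketch, assuming the placeholder name MPI_IEEE_BINARY64 and a platform whose double is the IEEE 754 binary64 format (the mapping obligation mentioned above rests with the user):

```c
#include <mpi.h>
#include <stdio.h>

/* Sketch only: MPI_IEEE_BINARY64 is the placeholder name used above; the
 * real symbol is to be decided.  We assume the platform's double is the
 * IEEE 754 binary64 format, which the user must check before mapping the
 * language type onto the MPI type. */
int main(int argc, char **argv)
{
    int rank;
    double local = 1.0, global = 0.0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Allreduce(&local, &global, 1, MPI_IEEE_BINARY64 /* hypothetical */,
                  MPI_SUM, MPI_COMM_WORLD);

    if (rank == 0)
        printf("sum = %f\n", global);

    MPI_Finalize();
    return 0;
}
```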

Given the lesser importance of decimal floating-point in ISO languages, these features should be considered optional. However, supporting decimal floating-point may give MPI better support for languages like Python.

Changes to the Text

TODO

Impact on Implementations

Implementations will need to implement these types, which may require work in cases where support for e.g. MPI_REAL2 or MPI_REAL16 is lacking.

Decimal floating-point reductions may require new code. Making these types optional reduces the implementation burden in cases where compiler support is not available and MPI implementations would otherwise have to implement the arithmetic by hand.

A high-quality implementation may need to use special care when implementing reduction operators that can lose precision.
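A user-level illustration of the concern (not a requirement on implementations): promoting binary32 data to binary64 before the reduction and rounding back once at the end avoids accumulating a rounding error at every combine step. A careful implementation could do something similar internally.

```c
#include <mpi.h>
#include <stdlib.h>

/* Sketch: reduce float (binary32) data by promoting to double (binary64),
 * so intermediate sums are accumulated in the wider format and only the
 * final result is rounded back to binary32. */
void careful_float_allreduce_sum(const float *in, float *out, int n,
                                 MPI_Comm comm)
{
    double *tmp_in  = malloc(n * sizeof(double));
    double *tmp_out = malloc(n * sizeof(double));

    for (int i = 0; i < n; i++)
        tmp_in[i] = (double)in[i];

    MPI_Allreduce(tmp_in, tmp_out, n, MPI_DOUBLE, MPI_SUM, comm);

    for (int i = 0; i < n; i++)
        out[i] = (float)tmp_out[i];      /* single final rounding */

    free(tmp_in);
    free(tmp_out);
}
```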

Impact on Users

These types will make it much easier to use these numerical formats across languages. We should not force C/C++ users to rely on Fortran types for 16- and 128-bit binary floating-point, for example.

References

ahori commented 7 years ago

I agree with Jeff.

Let's think about a case where an MPI job consists of a C program and a Fortran program (language heterogeneity). When a collective operation such as BCAST takes place, the C program calls the collective with, for example, MPI_INT and the Fortran program calls the same collective with MPI_INTEGER. However, the standard says that the type signature must be the same in the same collective call. (I believe there is no text in the standard stating that C's MPI_INT and Fortran's MPI_INTEGER are the same.)

This issue can be fixed by introducing Jeff's "language-agnostic" data types.
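For example (a sketch, using the hypothetical MPI_IEEE_BINARY64 name from the proposal; an analogous handle would be needed for integer data), both programs would name the same datatype, so the type signatures match by construction. The C side might look like:

```c
#include <mpi.h>

/* C side of a mixed C/Fortran job.  Both programs name the same
 * language-agnostic datatype, so the type signature of the collective is
 * identical on every process regardless of source language.
 * MPI_IEEE_BINARY64 is the hypothetical name from this proposal. */
void bcast_from_root(double *buf, int count, MPI_Comm comm)
{
    MPI_Bcast(buf, count, MPI_IEEE_BINARY64 /* hypothetical */, 0, comm);
}
```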

Even if the binary representations of MPI_INT and MPI_INTEGER are different, an MPI implementation can successfully convert between the representations, I believe.

I am sorry, I have no opinion on the decimal data types at this moment.

ahori commented 7 years ago

I talked with Rolf about this issue at EuroMPI/USA 2017. Here is my understanding: although many implementations running on widely used machines make no distinction between, for example, MPI_INT and MPI_INTEGER, the Fortran standard draws a clear distinction between Fortran integers and C integers. The same applies to floating-point types. So, at the level of the standard, MPI cannot have language-agnostic data types.

jeffhammond commented 6 years ago

@ahori I disagree with that conclusion.

The purpose of this ticket was to establish language-agnostic types for MPI that the user can map to the equivalent language type if one exists. In particular, this provides a way to support 128-bit floating-point data in C/C++. While the ISO languages do not support such a type, many compilers do, and this is a way to support that without having to reference compiler extensions in the MPI standard.
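On the user side, the mapping could look like this (a sketch, assuming a compiler that provides the __float128 extension and the hypothetical MPI_IEEE_BINARY128 handle; the standard itself never has to name the compiler extension):

```c
#include <mpi.h>

/* Sketch, assuming a compiler that provides the __float128 extension
 * (e.g. GCC or Clang on x86-64).  MPI_IEEE_BINARY128 is the hypothetical
 * language-agnostic handle; today a C user typically has to borrow the
 * optional Fortran handle MPI_REAL16 and hope the formats agree. */
void sum_quad(const __float128 *local, __float128 *global, int n,
              MPI_Comm comm)
{
    /* Today: MPI_Allreduce(local, global, n, MPI_REAL16, MPI_SUM, comm); */
    MPI_Allreduce(local, global, n, MPI_IEEE_BINARY128 /* hypothetical */,
                  MPI_SUM, comm);
}
```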

Furthermore, as noted above, this provides a way to make MPI better in the context of languages like Python, which are not referenced in the MPI standard but are widely used with MPI via a C interface. Having support for IEEE decimal types allows Python to pass that data through MPI reliably. It also allows an MPI library to implement reductions for decimal types even though neither of our canonical languages (C and Fortran) support them, because MPI libraries can use hardware implementations (in the case of e.g. IBM POWER) or software library implementations of decimal floating-point.
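As a rough sketch of what such a reduction looks like today at the user level, assuming GCC's _Decimal64 extension: the decimal values travel as opaque bytes and the arithmetic comes from a user-defined operator. A native, hypothetical MPI_IEEE_DECIMAL64 type would let the library supply this operator itself, backed by hardware or a software decimal library.

```c
#include <mpi.h>

/* Sketch, assuming GCC's _Decimal64 extension (IEEE 754 decimal64) and a
 * runtime with decimal floating-point support.  Until a standard decimal
 * datatype exists, the 8-byte decimal encoding is carried as opaque bytes
 * and the arithmetic is supplied by a user-defined reduction operator. */
static void dec64_sum(void *invec, void *inoutvec, int *len,
                      MPI_Datatype *datatype)
{
    _Decimal64 *in    = invec;
    _Decimal64 *inout = inoutvec;
    for (int i = 0; i < *len; i++)
        inout[i] = inout[i] + in[i];   /* decimal arithmetic, no binary rounding */
    (void)datatype;
}

void allreduce_dec64(_Decimal64 *local, _Decimal64 *global, int n,
                     MPI_Comm comm)
{
    MPI_Datatype dec64;
    MPI_Op op;

    MPI_Type_contiguous(sizeof(_Decimal64), MPI_BYTE, &dec64);
    MPI_Type_commit(&dec64);
    MPI_Op_create(dec64_sum, /* commute = */ 1, &op);

    MPI_Allreduce(local, global, n, dec64, op, comm);

    MPI_Op_free(&op);
    MPI_Type_free(&dec64);
}
```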

ahori commented 6 years ago

@jeffhammond

As you can see in ticket #65, I am planning to add MPI_FLOAT16, MPI_FLOAT32, MPI_FLOAT64, and MPI_FLOAT128 (I am sorry, they are not MPI_BINARY*) as well as MPI_SHORT_FLOAT. This is because the C/C++ standards are expected to add fixed-width floating-point types, analogous to the fixed-width integer types. I think the decimal numbers are not supported by C, C++, or Fortran, and it is impossible to implement them in current MPI implementations, which have only C and Fortran bindings.
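If those fixed-width types arrive in C (ISO/IEC TS 18661-3 defines _Float16, _Float32, _Float64, and _Float128), the mapping would be direct; a sketch, using MPI_FLOAT32 as the still-hypothetical handle from #65:

```c
#include <mpi.h>

/* Sketch, assuming a compiler that implements the _FloatN types of
 * ISO/IEC TS 18661-3.  MPI_FLOAT32 is the hypothetical handle proposed
 * in #65, not yet part of the standard. */
void send_f32(const _Float32 *buf, int count, int dest, MPI_Comm comm)
{
    MPI_Send(buf, count, MPI_FLOAT32 /* hypothetical */, dest, /* tag */ 0, comm);
}
```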

Oh, I found gcc supports the decimal types, sorry.