Originally by RolfRabenseifner on 2010-09-01 11:53:06 -0500
(This ticket currently contains all tickets #A - #X for printing purpose)
229-A: Overview of all related tickets
Description
This ticket gives an overview of all related new MPI-3 Fortran tickets.
They are intended to be independent, or to depend on each other in only one direction.
Therefore they are voted on independently.
Ticket #229-A - this overview of all related tickets
Ticket #230-B - New module "USE mpi_f08"
Ticket #231-C - Fortran compile-time argument checking with individual handles
Ticket #233-E - The use of 'mpif.h' is strongly discouraged
Ticket #234-F - Choice buffers through TYPE(*) DIMENSION(..) declarations
Ticket #235-G - Corrections to "Problems with Fortran Bindings" (MPI-2.2 p.481) and "Problems Due to Strong Typing" (p.482)
Ticket #236-H - Corrections to "Problems Due to Data Copying and Sequence Association" (MPI-2.2 page 482)
Ticket #237-I - Corrections to "Problems Due to Fortran 90 Derived Types" (MPI-2.2 page 484)
Ticket #238-J - Corrections to "Registers and Compiler Optimizations" (MPI-2.2 page 371) and "A Problem with Register Optimization" (page 485)
Ticket #239-K - IERROR optional
Ticket #240-L - New syntax used in all three (mpif.h, mpi, mpi_f08)
Ticket #241-M - Not including old deprecated routines from MPI-2.0 - MPI-2.2
Ticket #242-N - Arguments with INTENT=IN, OUT, INOUT
Ticket #243-O - Status as MPI_Status Fortran derived type
Ticket #244-P - MPI_STATUS(ES)_IGNORE and MPI_ERRCODES_IGNORE through function overloading
Ticket #245-Q - MPI_ALLOC_MEM and Fortran
Ticket #246-R - Upper and lower case letters in new Fortran bindings
Ticket #247-S - All new Fortran 2008 bindings - Part 1
Ticket #248-T - All new Fortran 2008 bindings - Part 2
Ticket #249-U - Alternative formulation for Section 16.2 Fortran Support
Ticket #250-V - Minor Corrections in Fortran Interfaces
Ticket #252-W - Substituting dummy argument name "type" by "datatype" or "oldtype", and others
Ticket #253-X - mpi_f08 Interfaces for new MPI-3.0 routines
All these tickets are owned by Rolf Rabenseifner, Craig Rasmussen, and Jeff Squyres together.
Extended Scope
None.
History
Nomenclature:
Implicit interface = Fortran interface without compile-time argument checking.[[BR]]
This was already available in Fortran 77.
Explicit interface = Fortran interface with compile-time argument checking.[[BR]]
This requires at least Fortran 90.
Both interfaces are independent of the input style (fixed format or free format).
MPI-1.0 - MPI-1.3 were based on Fortran 77 interfaces.
In MPI-2.0 - MPI-2.2, all interfaces are "Fortran" interfaces.
In many cases, new Fortran 90 methods (e.g., KIND=....) are used.
Buffers are defined as <type> BUF(*), which isn't a Fortran notation.
The behavior is defined through the usage in implicit interfaces
(old Fortran 77 style subroutine definitions).
All handles are defined as INTEGER.
MPI libraries are allowed to enable compile-time argument checking of MPI applications,
as long as the application behaves as with implicit interfaces (i.e.,
no compile-time argument checking).
To enable compile-time argument checking, current MPI libraries use special
non-standard options for the buffer arguments.
Another problem area is the handling of Fortran optimization
together with nonblocking MPI routines.
Proposed Solution
Major goals of the New MPI-3 Fortran support are:
Enabling full compile-time argument checking.
Enhanced compile-time argument checking quality through the use of individually typed handles.
New solutions for the optimization problem in conjunction with MPI nonblocking routines.
Correction of incorrect advice in the current MPI-2.2.
To achieve a high compile-time argument checking quality together with acceptable backward compatibility,
the new features require the use of a new USE mpi_f08 module.
Parts of the features are also included in existing USE mpi.
The old-style include 'mpif.h' is kept and continues to offer
an old-style interface with (old Fortran 77) implicit interfaces,
but the use of 'mpif.h' is strongly discouraged.
Impact on Implementations
The C-based backend of MPI routines with buffer arguments is duplicated.
[[BR]]
Routines without buffer arguments can use the same interface for
the existing INCLUDE 'mpif.h' and USE mpi and the new
USE mpi_f08.
Impact on Applications / Users
None, as long as the application already uses an MPI library with compile-time argument checking.
Applications that are not consistent with compile-time argument checking may require some
bug fixes. Such applications are semantically correct programs,
but syntactically invalid according to the definition of MPI.
If application programmers do not want to resolve those application bugs,
they can still switch to include 'mpif.h'
and postpone the fixing of their application bugs.
Alternative Solutions
See inside of ticket descriptions.
Entry for the Change Log
See inside of ticket descriptions.
230-B: New module "USE mpi_f08"
See Ticket #229-A for an overview of the New MPI-3 Fortran Support.
Description
-Major decisions in this ticket:*
Full compile-time argument checking with this new mpi_f08 module.
A new mpi_f08 module for all new features to keep compatibility
for existing Fortran interface with existing mpi module.
Transition from a history-based wording (Basic & Extended Fortran Support)
to a consistent description of the Fortran support in MPI-3.
Further details are handled in further tickets:
Individual handle types for handle arguments, variables, and constants, see Ticket #231-C.
Void * arguments are represented by TYPE(*), DIMENSION(..) .
With this, non-contiguous sub-arrays can now be used with nonblocking routines.
This feature requires Fortran 2008 support, details see Ticket #234-F.
All MPI routines will have a new Fortran 2008 function specification
(the existing Fortran function specifications apply to "INCLUDE 'mpif.h'" and "USE mpi");
for details see Ticket #247-S.
Solving problems with nonblocking MPI operations, sub-arrays, and registers
through usage of the keyword "ASYNCHRONOUS", see Ticket #238-J.
Extended Scope
None.
History
Current MPI-2.2 requires that mpif.h contains full MPI-2.2
because in the Extended Fortran Support,
the standard requires (MPI-2.2, page 489, line 7):
Applications may use either the mpi module or the mpif.h include file.
Proposed Solution
''The ticket numbers in parenthesis (#xxx-X) indicate sentences that are removed if the appropriate
ticket is not voted in.''
MPI is not a language, and all MPI operations are expressed as functions,
subroutines, or methods, according to the appropriate language bindings, which for C,
C++, Fortran-77, and Fortran-95, are part of the MPI standard.
-but should read*
MPI is not a language, and all MPI operations are expressed as functions,
subroutines, or methods, according to the appropriate language bindings, which for C,
C++, ~~Fortran-77, and Fortran-95,~~ __and Fortran,__ are part of the MPI standard.
-MPI-2.2, Chapter 1, Introduction, Page 2, line 1 reads*
Allow convenient C, C++, Fortran-77, and Fortran-95 bindings for the interface.
-but should read*
Allow convenient C, C++, ~~Fortran-77, and Fortran-95~~ __and Fortran__ bindings for the interface.
-MPI-2.2, Chapter 1, Introduction, Page 4, line 34: Add in the new section "1.6 Background of MPI-3.0":*
A new Fortran mpi_f08 module is introduced to provide extended compile-time argument checking and
buffer handling in nonblocking routines.
The existing mpi module provides compile-time argument checking on the basis
of existing MPI-2.2 routine definitions(#232-D)**.
The use of mpif.h is strongly discouraged(#233-E)**.
-MPI-2.2, Chapter 2, Terms and Convention, Page 9, line 18 reads*
Some of the major areas of difference are the naming conventions, some semantic definitions,
file objects, Fortran 90 vs Fortran 77, C++, processes, and interaction with signals.
All MPI functions are first specified in the language-independent notation. Immediately
below this, the ISO C version of the function is shown followed by a version of the same
function in Fortran
and then the C++ binding.
Fortran in this document refers to Fortran 90; see Section 2.6.
-but should read*
All MPI functions are first specified in the language-independent notation. Immediately
below this, language dependent bindings follow:
The ISO C version of the function.
The Fortran version of the same
function used with USE mpi or INCLUDE 'mpif.h'.
The Fortran version used with USE mpi_f08.
The C++ binding (which is deprecated). [[BR]]
Fortran in this document refers to Fortran 90 and higher; see Section 2.6.
-MPI-2.2, Chapter 2, Terms and Conventions, Section 2.6.2 Fortran Binding Issues, page 18, line 6-9 reads*
The MPI Fortran binding is inconsistent with the Fortran 90 standard in several respects.
These inconsistencies, such as register optimization problems, have implications for
user codes that are discussed in detail in Section 16.2.2.
They are also inconsistent with Fortran 77.
-but should read*
The MPI Fortran ~~binding is~~ __bindings are__ inconsistent with the Fortran 90 standard in several respects.
These inconsistencies, such as register optimization problems, have implications for
user codes that are discussed in detail in Section 16.2.2.
They are also inconsistent with Fortran 77.
In Fortran, the corresponding integer is an integer of kind MPI_OFFSET_KIND, defined
in mpif.h and the mpi module.
-but should read*
In Fortran, the corresponding integer is an integer ~~of~~ __with__ kind __parameter__ MPI_OFFSET_KIND,
__which is__ defined
in mpif.h__,__ ~~and~~ the mpi module__, and the mpi_f08 module__.
The Fortran MPI-2 language bindings have been designed to be compatible with the Fortran
90 standard (and later). These bindings are in most cases compatible with Fortran 77,
implicit-style interfaces.
-Rationale.* Fortran 90 contains numerous features designed to make it a more "modern"
language than Fortran 77. It seems natural that MPI should be able to take
advantage of these new features with a set of bindings tailored to Fortran 90. MPI
does not (yet) use many of these features because of a number of technical difficulties.
-(End of rationale.)*
MPI defines two levels of Fortran support, described in Sections 16.2.3 and 16.2.4. In
the rest of this section, "Fortran" and "Fortran 90" shall refer to "Fortran 90" and its
successors, unless qualified.
Basic Fortran Support An implementation with this level of Fortran support provides
the original Fortran bindings specified in MPI-1, with small additional requirements
specified in Section 16.2.3.
Extended Fortran Support An implementation with this level of Fortran support
provides Basic Fortran Support plus additional features that specifically support
Fortran 90, as described in Section 16.2.4.
A compliant MPI-2 implementation providing a Fortran interface must provide Extended
Fortran Support unless the target compiler does not support modules or KIND-
parameterized types.
-together with MPI-2.2, page 488, lines 19-24*
A new set of functions to provide additional support for Fortran intrinsic numeric
types, including parameterized types: MPI_SIZEOF, MPI_TYPE_MATCH_SIZE,
MPI_TYPE_CREATE_F90_INTEGER, MPI_TYPE_CREATE_F90_REAL and
MPI_TYPE_CREATE_F90_COMPLEX. Parameterized types are Fortran intrinsic types
which are specified using KIND type parameters. These routines are described in detail
in Section 16.2.5.
-together with MPI-2.2, page 489, lines 7-14*
Applications may use either the mpi module or the mpif.h include file. An implementation
may require use of the module to prevent type mismatch errors (see below).
-Advice to users.* It is recommended to use the mpi module even if it is not necessary to
use it to avoid type mismatch errors on a particular system. Using a module provides
several potential advantages over using an include file. *(End of advice to users.)*
It must be possible to link together routines some of which USE mpi and others of which
INCLUDE mpif.h.
-but should read (TODO: check whether "the only new language feature" is true)*
The Fortran MPI-2 language bindings have been designed to be generally compatible with the Fortran
90 standard (and later). ~~These bindings are in most cases compatible with Fortran 77,
implicit-style interfaces.~~
-Rationale.* Fortran 90 contains numerous features designed to make it a more "modern"
language than Fortran 77. It seems natural that MPI should be able to take
advantage of these new features with a set of bindings tailored to Fortran 90.
~~MPI does not (yet) use many of these features because of a number of technical difficulties.~~
__In Fortran 2008, the only new language features used are assumed type and assumed rank, which were defined to
allow the definition of choice arguments as part of the Fortran language.__
-(End of rationale.)*
MPI defines ~~two levels~~ __three methods__ of Fortran support:
~~, described in Sections 16.2.3 and 16.2.4. In
the rest of this section, "Fortran" and "Fortran 90" shall refer to "Fortran 90" and its
successors, unless qualified.~~
1. __**`INCLUDE 'mpif.h'`**__ ~~**Basic Fortran Support**~~
__This method is described__
~~An implementation with this level of Fortran support provides
the original Fortran bindings specified in MPI-1, with small additional requirements
specified~~
in Section 16.2.3.
__The use of the include file `mpif.h` is strongly discouraged since MPI-3.0 **(#233-E)**.__
2. __**`USE mpi`**__ ~~**Extended Fortran Support**~~
__This method is described__
~~An implementation with this level of Fortran support
provides Basic Fortran Support plus additional features that specifically support
Fortran 90, as described~~
in Section 16.2.4 __and requires compile-time argument checking__.
3. __**`USE mpi_f08`** This method is described in Section 16.2.5
and requires compile-time argument checking that also includes unique handle types.__
Application subroutines and functions may use either one of the mpi modules or the mpif.h include file. An implementation
may require use of one of the modules to prevent type mismatch errors~~ (see below)~~.
-Advice to users.* It is recommended to use __one of__ the ~~mpi~~__MPI__ module__s__ even
if it is not necessary to
use it to avoid type mismatch errors on a particular system. Using a module provides
several potential advantages over using an include file. *(End of advice to users.)*
In a single application, it must be possible to link together routines some of which USE mpi and others of which
USE mpi_f08 or INCLUDE mpif.h.
The INTEGER compile-time constant MPI_SUBARRAYS is MPI_SUBARRAYS_SUPPORTED if all choice arguments are
defined in explicit interfaces with standardized assumed type and assumed rank, otherwise it equals MPI_SUBARRAYS_UNSUPPORTED.
This constant exists with each Fortran support method, but not in the C/C++ header files.
The value may be different for each Fortran support method. (#234-F)****
Section 16.2.6 describes additional functionality that is part of the Fortran support. This section defines a new set of functions to provide additional support for Fortran intrinsic numeric
types, including parameterized types. The functions are: MPI_SIZEOF, MPI_TYPE_MATCH_SIZE,
MPI_TYPE_CREATE_F90_INTEGER, MPI_TYPE_CREATE_F90_REAL and
MPI_TYPE_CREATE_F90_COMPLEX. Parameterized types are Fortran intrinsic types
which are specified using KIND type parameters.
~~These routines are described in detail
in Section 16.2.5.~~
-MPI-2.2, Section 16.2.3 Basic Fortran Support, page 487, line 43 - page 488, line 4, reads*
16.2.3 Basic Fortran Support
Because Fortran 90 is (for all practical purposes) a superset of Fortran 77, Fortran 90
(and future) programs can use the original Fortran interface. The following additional
requirements are added:
Implementations are required to provide the file mpif.h, as described in the original
MPI-1 specification.
mpif.h must be valid and equivalent for both fixed- and free- source form.
-but should read *
16.2.3 Basic Fortran Support Through the mpif.h Include File
The use of the mpif.h header file is strongly discouraged (#233-E).
Because Fortran 90 is (for all practical purposes) a superset of Fortran 77, Fortran 90
(and future) programs can use the original Fortran interface.
The Fortran bindings are compatible with Fortran 77
implicit-style interfaces in most cases. ~~The following additional requirements are added:~~ The include file mpif.h must:
~~1. Implementations are required to provide the file mpif.h, as described in the original
MPI-1 specification.~~
Define all named MPI constants.
Declare MPI functions that return a value.
Define all handles as INTEGER. This is reflected in the first of the two
Fortran interfaces in each MPI function definition.
~~2. mpif.h must~~ Be valid and equivalent for both fixed- and free- source form.
For each MPI routine, an implementation can choose to use an implicit or explicit interface.
Implementations with Extended Fortran support must provide:
1. An mpi module
2. A new set of functions to provide additional support for Fortran intrinsic numeric
types, including parameterized types: MPI_SIZEOF, MPI_TYPE_MATCH_SIZE,
MPI_TYPE_CREATE_F90_INTEGER, MPI_TYPE_CREATE_F90_REAL and
MPI_TYPE_CREATE_F90_COMPLEX. Parameterized types are Fortran intrinsic types
which are specified using KIND type parameters. These routines are described in detail
in Section 16.2.5.
Additionally, high-quality implementations should provide a mechanism to prevent fatal
type mismatch errors for MPI routines with choice arguments.
The mpi Module
An MPI implementation must provide a module named mpi that can be used in a Fortran
90 program. This module must:
Define all named MPI constants.
Declare MPI functions that return a value.
An MPI implementation may provide in the mpi module other features that enhance
the usability of MPI while maintaining adherence to the standard. For example, it may:
Provide interfaces for all or for a subset of MPI routines.
Provide INTENT information in these interface blocks.
-but should read*
16.2.4 Extended Fortran Support Through the mpi Module
Implementations with Extended Fortran support must provide:
~~1. An mpi module~~[[BR]]
~~2. A new set of functions to provide additional support for Fortran intrinsic numeric
types, including parameterized types: MPI_SIZEOF, MPI_TYPE_MATCH_SIZE,
MPI_TYPE_CREATE_F90_INTEGER, MPI_TYPE_CREATE_F90_REAL and
MPI_TYPE_CREATE_F90_COMPLEX. Parameterized types are Fortran intrinsic types
which are specified using KIND type parameters. These routines are described in detail
in Section 16.2.5.~~
~~Additionally, high-quality implementations should provide a mechanism to prevent fatal
type mismatch errors for MPI routines with choice arguments.~~
The mpi Module
An MPI implementation must provide a module named mpi that can be used in a Fortran
90 program. This module must:
Define all named MPI constants
Declare MPI functions that return a value.
Provide explicit interfaces for all MPI routines,
i.e., this module guarantees compile-time argument checking,
and allows positional and keyword-based argument lists. (#232-D)****
Define all handles as INTEGER. This is reflected in the first of the two
Fortran interfaces in each MPI function definition.
An MPI implementation may provide __other features__ in the mpi module ~~other features~~ that enhance
the usability of MPI while maintaining adherence to the standard. For example, it may provide INTENT information in these interface blocks.
~~Provide interfaces for all or for a subset of MPI routines.~~ (#232-D)
~~Provide INTENT information in these interface blocks.~~
'''MPI-2.2, Section 16.2.4, page 489, lines 7-14 are removed
(they have been already used in Section 16.2.1)'''
-After MPI-2.2, Section 16.2.4, page 489, line 30, the following section is added*
(for better readability of this ticket, the following new text is not underlined although it should):
16.2.5 Fortran Support Through the mpi_f08 Module
An MPI implementation must provide a module named mpi_f08 that can be used in a Fortran
program.
With this module, new Fortran definitions are added for each MPI routine (#247-S),
except for routines that are deprecated (#241-M).
This module must:
Define all named MPI constants.
Declare MPI functions that return a value.
Provide explicit interfaces for all MPI routines,
i.e., this module guarantees compile-time argument checking.
Define all handles with uniquely named handle types
(instead of INTEGER handles in the mpi module).
This is reflected in the second of the two Fortran interfaces in each MPI function definition.
-(#231-C)*
Set the INTEGER compile-time constant MPI_SUBARRAYS to MPI_SUBARRAYS_SUPPORTED and
declare choice buffers with the Fortran 2008 feature assumed-type
and assumed-rank "TYPE(*), DIMENSION(..)" if the underlying Fortran
compiler supports it.
With this, non-contiguous sub-arrays are also valid
in nonblocking routines. (#234-F)
Set the MPI_SUBARRAYS compile-time constant to MPI_SUBARRAYS_UNSUPPORTED and
declare choice buffers with a compiler-dependent mechanism that
overrides type checking if the underlying Fortran compiler does not
support the Fortran 2008 assumed-type and assumed-rank notation.
In this case, the use of non-contiguous sub-arrays
in nonblocking calls may be restricted as with the mpi module. (#234-F)
-Advice to implementors.*
In this case, the choice argument may be implemented with an
explicit interface with compiler directives, for example:
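The example itself does not appear at this point of the ticket text; presumably it is the same directive-based declaration as quoted in Ticket #232-D, sketched here for a choice argument BUF (the directives are compiler-specific, not part of MPI):
  !DEC$ ATTRIBUTES NO_ARG_CHECK :: BUF
  !$PRAGMA IGNORE_TKR BUF
  REAL, DIMENSION(*) :: BUF
-(End of advice to implementors.)*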
Declare each argument with an INTENT=IN, OUT, or INOUT as appropriate (#242-N).
Declare all status and array_of_statuses output
arguments as optional through function overloading,
instead of using MPI_STATUS_IGNORE(#244-P).
Declare all array_of_errcodes output
arguments as optional through function overloading,
instead of using MPI_ERRCODES_IGNORE(#244-P).
Declare all ierror output arguments as optional,
except for user-defined callback functions (e.g., comm_copy_attr_fn) and their
predefined callbacks (e.g., MPI_NULL_COPY_FN). (#239-K)
-Rationale.
For user-defined callback functions (e.g., comm_copy_attr_fn) and their
predefined callbacks (e.g., MPI_NULL_COPY_FN), the ierror argument is not optional,
i.e., these user-defined functions need not check whether the MPI
library calls these routines with or without an actual ierror output argument.
-(End of rationale.) (#239-K)
'''Renumbering of MPI-2.2, Section 16.2.5 to Section 16.2.6, on page 489, line 31.'''
Impact on Implementations
This module mainly requires:
A new module interface due to the new MPI-3.0 bindings; for details, see Ticket #247-S.
For all choice buffer arguments (<type> buf(*)), new MPI datatype handling must
be implemented based on the internal Fortran argument descriptor used with the
"TYPE(*), DIMENSION(..)" declarations, for details see Ticket #234-F.
Impact on Applications / Users
None, as long as they do not use this new module.
If they want to use this new mpi_f08 module, then they must apply the following changes (sketched below):
Correct syntactical errors in the application that are detected due to strong typing.
Substitute "include 'mpif.h'" or "USE mpi" with "USE mpi_f08".
Substitute all declarations of INTEGER handle variables with the new
TYPE(MPI_Comm), etc. (only if Ticket #231-C is voted in)
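A minimal sketch of such a migration, assuming an application that duplicates MPI_COMM_WORLD (variable names are illustrative; the exact mpi_f08 bindings depend on Tickets #239-K, #243-O, and #247-S):
  ! before:  USE mpi
  !          INTEGER :: newcomm, ierror
  ! after:
  USE mpi_f08
  TYPE(MPI_Comm) :: newcomm
  INTEGER :: ierror
  CALL MPI_Comm_dup(MPI_COMM_WORLD, newcomm, ierror)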
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
231-C: Fortran compile-time argument checking with individual handles
See Ticket #229-A for an overview of the New MPI-3 Fortran Support.
This ticket defines the method used to implement compile-time argument checking for handles in the new mpi_f08 module.
-Details:*
In principle, there are 3 different solutions.
There are several problem areas:
Type checking itself: For this goal, a handle must be defined as a Fortran derived type,
i.e., as a structure. In Fortran, named handle types can be implemented only with this feature.
Minimizing the problems of conversion between different handle language bindings:
C binding
Existing Fortran INTEGER binding
New named Fortran binding based on derived types.
For this several possibilities are seen:
(A) The new derived type consists of exactly one MPI_VAL entry that contains the existing INTEGER value.
With this, conversion between old and new Fortran handles is trivial application code:[[BR]]
- Conversion from old to new: new%MPI_VAL = old [[BR]]
- Conversion from new to old: old = new%MPI_VAL [[BR]]
Existing C-Fortran conversion routines can be directly applied to new%MPI_VAL.
(B) The new derived type is allowed to contain additional vendor (MPI library) specific
data.
Conversion from new to old is still trivial (old = new%MPI_VAL), but for the other direction,
a conversion function or subroutine is necessary.
(C) No rules about the content of the handle derived types:
New Conversion routines between old and new Fortran are necessary, and also
between the C handles and the new ones in Fortran.
Minimizing the programming needs inside of C based wrappers:
With an additional SEQUENCE attribute in solution A, the derived type is identical
to one numerical storage unit, and therefore identical to one INTEGER,
i.e., both the old and the new Fortran interfaces can be implemented with the same C code.
Optimization
Here, Solution B has an advantage in the case of different handle values in C
and Fortran. With B, the C value can be stored additionally in the handle.
Both values (Fortran and C) are loaded together (due to cache line) and therefore
the number of memory accesses can be reduced by one.
In principle, the same speed may be achieved when a common base is used for
handle storage and integer values are used in Fortran and C.
Based on the advantages and disadvantages shown above, the proposed solution is based on (A).
Extended Scope
None.
History
Proposed Solution
-Rule about editing:* [[BR]]
For the new Fortran handle types, one should use, e.g.,
In Fortran, all handles have type INTEGER.
In C and C++, a different handle type is
defined for each category of objects.
In addition, handles themselves are distinct objects
in C++. The C and C++ types must support the use of the assignment and equality operators.
-but should read*
In Fortran__ with USE mpi or INCLUDE 'mpif.h'__, all handles have type INTEGER.
In Fortran with USE mpi_f08, and in C and C++, a different handle type is
defined for each category of objects.
With Fortran USE mpi_f08, the handles are defined as Fortran sequenced derived types
that consist of only one element INTEGER :: MPI_VAL. The internal handle value is identical
to the Fortran INTEGER value used in the mpi module and in mpif.h.
The names are identical to the names in C, except that they are not case sensitive.
For example:
\cdeclindex{MPI\_Comm}
TYPE MPI_Comm
SEQUENCE
INTEGER :: MPI_VAL
END TYPE MPI_Comm
__
In addition, handles themselves are distinct objects
in C++. The C and C++ types must support the use of the assignment and equality operators.
-Same section, after the Advice to implementers, MPI-2.2, page 13, line 4 add:*
**Rationale.
Due to the sequence attribute in the definition of handles in the mpi_f08 module,
the new Fortran handles are associated with one numerical storage unit,
i.e., they have the same C binding as the INTEGER handles of the mpi module.
Due to the equivalence of the integer values, applications can easily
convert MPI handles between all three supported Fortran methods. For
example, an integer communicator handle COMM can be converted directly
into an exactly equivalent mpi_f08 communicator handle named comm_f08
by comm_f08%MPI_VAL=COMM, and vice versa.
-(End of rationale.)***
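A minimal sketch of such a conversion in application code, following the rationale above (variable names are illustrative):
  USE mpi_f08
  INTEGER        :: comm_old    ! INTEGER handle as used with mpif.h or the mpi module
  TYPE(MPI_Comm) :: comm_f08    ! derived-type handle as used with the mpi_f08 module
  comm_f08%MPI_VAL = comm_old   ! convert the old (INTEGER) handle to the new one
  comm_old = comm_f08%MPI_VAL   ! convert the new handle back to the old (INTEGER) one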
-MPI-2.2, Chapter 2, Terms and Conventions, Section 2.6.2 Fortran Binding Issues, page 18, line 3 reads*
Handles are represented in Fortran as INTEGERs.
-but should read*
Handles are represented in Fortran as INTEGERs__,
or with the mpi_f08 module as a derived type,
see MPI-2.2, Section 2.5.1 on page 12__.
-MPI-2.2, Chapter 9, The Info Object, page 299, lines 14-15 read*
Many of the routines in MPI take an argument info.
info is an opaque object with a handle
of type MPI_Info in C, MPI::Info in C++,
and INTEGER in Fortran.
-but should read*
Many of the routines in MPI take an argument info.
info is an opaque object with a handle
of type MPI_Info in C__ and Fortran with the mpi_f08 module, MPI::Info in C++,
and INTEGER in Fortran with the mpi module or the include file mpif.h__.
'''MPI-2.2, Section 10.3.2 Starting Processes and Establishing Communication,
in the explanation of the argument list of MPI_COMM_SPAWN, MPI-2.2, page 311, lines 39-40 read'''
The info argument The info argument to all of the routines in this chapter is an opaque
handle of type MPI_Info in C, MPI::Info in C++
and INTEGER in Fortran.
-but should read*
The info argument The info argument to all of the routines in this chapter is an opaque
handle of type MPI_Info in C__ and Fortran with the mpi_f08 module, MPI::Info in C++
and INTEGER in Fortran with the mpi module or the include file mpif.h__.
-MPI-2.2, Section 16.3.4 Transfer of Handles, page 499, lines 1-2 read*
The type definition MPI_Fint is provided in C/C++ for an integer of the size that
matches a Fortran INTEGER; usually, MPI_Fint will be equivalent to int.
-but should read*
The type definition MPI_Fint is provided in C/C++ for an integer of the size that
matches a Fortran INTEGER; usually, MPI_Fint will be equivalent to int.
With the Fortran mpi module or the mpif.h include file,
a Fortran handle is a Fortran INTEGER value
that can be used in the following conversion functions.
With the Fortran mpi_f08 module,
a Fortran handle is a derived type that contains the Fortran INTEGER field MPI_VAL,
which contains the INTEGER value
that can be used in the following conversion functions.
-Appendix A.1.1 Defined Constants:*
MPI-2.2, page 515, line 32: INTEGER --> INTEGER or TYPE(MPI_Errhandler)
MPI-2.2, page 516, line 4: INTEGER --> INTEGER or TYPE(MPI_Datatype)
MPI-2.2, page 517, line 4: INTEGER --> INTEGER or TYPE(MPI_Datatype)
MPI-2.2, page 517, line 27: INTEGER --> INTEGER or TYPE(MPI_Datatype)
MPI-2.2, page 518, line 3: INTEGER --> INTEGER or TYPE(MPI_Datatype)
MPI-2.2, page 518, line 14: INTEGER --> INTEGER or TYPE(MPI_Datatype)
MPI-2.2, page 518, line 22: INTEGER --> INTEGER or TYPE(MPI_Datatype)
MPI-2.2, page 518, line 29: INTEGER --> INTEGER or TYPE(MPI_Comm)
MPI-2.2, page 519, line 3: INTEGER --> INTEGER or TYPE(MPI_Op)
MPI-2.2, page 519, line 23: INTEGER --> INTEGER or TYPE(MPI_Group)
MPI-2.2, page 519, line 25: INTEGER --> INTEGER or TYPE(MPI_Comm)
MPI-2.2, page 519, line 27: INTEGER --> INTEGER or TYPE(MPI_Datatype)
MPI-2.2, page 519, line 29: INTEGER --> INTEGER or TYPE(MPI_Request)
MPI-2.2, page 519, line 31: INTEGER --> INTEGER or TYPE(MPI_Op)
MPI-2.2, page 519, line 33: INTEGER --> INTEGER or TYPE(MPI_Errhandler)
MPI-2.2, page 519, line 35: INTEGER --> INTEGER or TYPE(MPI_File)
MPI-2.2, page 519, line 37: INTEGER --> INTEGER or TYPE(MPI_Info)
MPI-2.2, page 519, line 39: INTEGER --> INTEGER or TYPE(MPI_Win)
MPI-2.2, page 519, line 47: INTEGER --> INTEGER or TYPE(MPI_Group)
MPI-2.2, page 523, line 32: INTEGER, DIMENSION(MPI_STATUS_SIZE,*) --> INTEGER, DIMENSION(MPI_STATUS_SIZE,*) or TYPE(MPI_Status), DIMENSION(*)
MPI-2.2, page 523, line 34: INTEGER, DIMENSION(MPI_STATUS_SIZE) --> INTEGER, DIMENSION(MPI_STATUS_SIZE) or TYPE(MPI_Status)
Impact on Implementations
Nearly none, because the same wrappers can be used for the old and the new module
(because the C binding of the new and old handles is identical).
Impact on Applications / Users
None, as long as they do not use the new mpi_f08 module.
If they want to use this new mpi_f08 module, then they must:
Substitute all declarations of INTEGER handle variables with the new
TYPE(MPI_Comm), etc.
Further items can be found in Ticket #230-B.
Alternative Solutions
See description of this ticket.
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
232-D: Existing module "USE mpi" with compile-time argument checking
See Ticket #229-A for an overview of the New MPI-3 Fortran Support.
Compile-time argument checking will also be mandatory for the mpi module.
-Details:*
It is now required that "USE mpi" guarantees compile-time argument checking.
Choice arguments (i.e., the buffers) may be handled without compile-time argument checking
through a simple call by reference or in-and-out-copy in case of
non-contiguous sub-arrays.
MPI Handles are still Fortran INTEGER.
No Type Mismatch Problems for Subroutines with Choice Arguments
A high-quality MPI implementation should provide a mechanism to ensure that MPI choice
arguments do not cause fatal compile-time or run-time errors due to type mismatch. An
MPI implementation may require applications to use the mpi module, or require that it be
compiled with a particular compiler
flag, in order to avoid type mismatch problems.
-Advice to implementors.* In the case where the compiler does not generate errors,
nothing needs to be done to the existing interface. In the case where the compiler
may generate errors, a set of overloaded functions may be used. See the paper of M.
Hennecke [26]. Even if the compiler does not generate errors, explicit interfaces for
all routines would be useful for detecting errors in the argument list. Also, explicit
interfaces which give INTENT information can reduce the amount of copying for BUF(*)
arguments. *(End of advice to implementors.)*
-but should read*
-Advice to implementors. *
In the `mpi` module with some compilers, a choice argument can be implemented with the
following explicit interface:
` !DEC$ ATTRIBUTES NO_ARG_CHECK :: BUF `[[BR]]
` !$PRAGMA IGNORE_TKR BUF `[[BR]]
` REAL, DIMENSION(*) :: BUF `
In this case, the compile-time constant MPI_SUBARRAYS equals
MPI_SUBARRAYS_UNSUPPORTED **[provided Ticket #234-F]**.
It is explicitly allowed that the choice arguments are implemented
in the same way as with the `mpi_f08` module.
In the case where the compiler does not provide such functionality,
a set of overloaded functions may be used. See the paper of M.
Hennecke [26].
-(End of advice to implementors.)*
Impact on Implementations
The mpi module must be implemented with explicit subroutine interfaces
for all MPI routines.
This can be implemented with most Fortran compilers with the following method (a sketch is given below):
Use of the MPI_... interfaces defined in Appendix A.3.
Concatenation of all lines with their next ones if they end with a comma.
In MPI_SIZEOF, the buffer X is only a single variable, i.e., DIMENSION(*) is omitted.
The last two interfaces in Section A.3.14 are not used, because they are
deprecated prototype definitions.
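For illustration only, the method above might produce explicit interface blocks in the mpi module of roughly the following form (MPI_COMM_RANK is used as an arbitrary example; INTENT information may, but need not, be added):
  INTERFACE
    SUBROUTINE MPI_COMM_RANK(COMM, RANK, IERROR)
      INTEGER :: COMM, RANK, IERROR
    END SUBROUTINE MPI_COMM_RANK
  END INTERFACE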
gfortran: '''(TODO: Craig Rasmussen may explain the current status of implementing
this option in gfortran.)'''
-TODO* Rolf Rabenseifner has a freely usable interface that is directly copied from MPI-2.2.
Impact on Applications / Users
None, as long as the user program is syntactically correct.
Current MPI-2.2 already allows compile-time argument checking, therefore portable user programs
must be syntactically correct.
Users may need to correct syntactically wrong programs if their current MPI-2.2
library has not yet implemented explicit interfaces with compile-time argument checking.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
233-E: Deprecating INCLUDE 'mpif.h'
See Ticket #229-A for an overview of the New MPI-3 Fortran Support.
There is no significant further need for mpif.h.
It can be easily substituted by the mpi module as long as the application
uses the MPI interface correctly, because the mpi module
may (with MPI-2.2), or
must (with MPI-3.0 when Ticket #232-D passes)
fulfill compile-time argument checking.
Known problems (only one at the moment):
If a user calls a routine with an actual argument that is a scalar variable
where MPI requires an array with one element, then implicit interfaces work
correctly, but compile-time argument checking with explicit interfaces causes an application bug to be reported.
Such semantically correct and syntactically incorrect programs must be fixed
before a user can switch from "INCLUDE mpif.h" to "USE mpi".
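A minimal, hypothetical sketch of this problem (the routine and the variable names are chosen for illustration only):
  INTEGER :: nnodes, dims, ierror
  nnodes = 4
  dims   = 0
  ! MPI_DIMS_CREATE expects an array DIMS(*); passing the scalar dims works with
  ! the implicit interfaces of mpif.h, but the explicit interfaces of the mpi
  ! module report a compile-time argument mismatch.
  CALL MPI_DIMS_CREATE(nnodes, 1, dims, ierror)
  ! Fix: declare a one-element array instead, e.g., INTEGER :: dims(1)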
Extended Scope
None.
History
Proposed Solution
-As already mentioned in Ticket #230-B, in the new section "1.6 Background of MPI-3.0":*:
The Fortran include file mpif.h is deprecated (#233-E).
-As already mentioned in ticket #230-B, MPI-2.2, page 480, Section 16.2.1, line 37-39 are substituted by*
1. __**`INCLUDE 'mpif.h'`**__ ~~**Basic Fortran Support**~~
__This method is described__
~~An implementation with this level of Fortran support provides
the original Fortran bindings specified in MPI-1, with small additional requirements
specified~~
in Section 16.2.3.
__The use of the include file `mpif.h` is strongly discouraged since MPI-3.0.__
-As already mentioned in ticket #230-B, MPI-2.2, Section 16.2.3, page 487, line 44 is substituted by*
16.2.3 Basic Fortran Support Through the mpif.h Include-file
The mpif.h header file is deprecated.
Impact on Implementations
None.
Impact on Applications / Users
Users may need to switch to the mpi module due to
user-specific rules that forbid using features
whose use is strongly discouraged.
This requires that
"INCLUDE 'mpif.h'" is substituted by "USE mpi".
Bugs in the user's software that are detected by compile-time argument checking must be fixed.
Some semantically correct arguments may need corrections due to syntax requirements,
see above in the description.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
234-F: Choice buffers through "TYPE(*), DIMENSION(..)" declarations
See Ticket #229-A for an overview of the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: 13 yes, 0 no, 2 abstain.[[BR]]
Voting was under the assumption that "TYPE(*), DIMENSION(..)"
will have Fortran standard quality.
Description
-Major decisions in this ticket:*
Choice arguments, i.e., all buffers, are now declared and implemented through the
new Fortran 2008 feature "TYPE(*), DIMENSION(..)".
This requires implementation effort, because non-contiguous sub-arrays are now
handled correctly, i.e., they can also be used in nonblocking routines.
A new integer compile-time constant MPI_SUBARRAYS reports
independently in all three Fortran support methods
whether copying-free call by reference is implemented for choice arguments, i.e.,
non-contiguous sub-arrays can be used in nonblocking routines.
-Details:*
Fortran 2008 will provide assumed type and assumed rank declarations for arguments, i.e.,
TYPE(*), DIMENSION(..).
INTERFACE
SUBROUTINE MPI_Xxx(buf, ....) &
& BIND(C,NAME='mpi_xxx_f_to_c')
USE, INTRINSIC :: ISO_C_BINDING
TYPE(*), DIMENSION(..) :: buf
END SUBROUTINE MPI_Xxx
END INTERFACE
With this interface, a wrapper mpi_xxx_f_to_c (implemented in C or Fortran) is called and buf is passed as a pointer to a
-Fortran descriptor* as described in
http://www.j3-fortran.org/doc/year/08/08-305.txt or later.
Required by "USE mpi_f08"
Optional with "USE mpi"
(Optional with "mpif.h", because compile-time argument checking was never forbidden with mpif.h)
Because compilers may implement this interface late, and because only the use
of this or compatible methods allows non-contiguous sub-arrays to be
handled correctly by nonblocking routines,
a new compile-time constant is introduced so that this quality issue can be checked by the
application.
If the value is MPI_SUBARRAYS_UNSUPPORTED, then the application may copy such non-contiguous buffers into
contiguous scratch buffers that are not freed or reused before the matching MPI_Wait returns.
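A minimal sketch of such application-side copying with the mpi module (the message arguments and the 20-element strided buffer are illustrative; MPI_SUBARRAYS and its two values are the constants proposed in this ticket):
  USE mpi
  REAL    :: a(100), scratch(20)
  INTEGER :: dest, tag, comm, rq, ierror, status(MPI_STATUS_SIZE)
  IF (MPI_SUBARRAYS .EQ. MPI_SUBARRAYS_UNSUPPORTED) THEN
     scratch = a(1:100:5)        ! pack the non-contiguous data into a contiguous buffer
     CALL MPI_ISEND(scratch, 20, MPI_REAL, dest, tag, comm, rq, ierror)
  ELSE
     CALL MPI_ISEND(a(1:100:5), 20, MPI_REAL, dest, tag, comm, rq, ierror)
  END IF
  ! ... computation overlapped with communication ...
  CALL MPI_WAIT(rq, status, ierror)  ! scratch may be reused only after MPI_WAIT returns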
Extended Scope
None.
History
The Fortran standardization body is actively working on this topic to provide
a solution so that explicit interfaces can be provided for all MPI routines,
including all choice arguments.
A positive side effect is that the problems with strided arrays and nonblocking routines
can also vanish. For this an implementation effort is necessary.
MPI functions sometimes use arguments with a choice (or union) data type. Distinct calls
to the same routine may pass by reference actual arguments of different types. The mechanism
for providing such arguments will differ from language to language.
For Fortran, the
document uses <type> to represent a choice variable;
for C and C++, we use void *.
-but should read*
MPI functions sometimes use arguments with a choice (or union) data type. Distinct calls
to the same routine may pass by reference actual arguments of different types. The mechanism
for providing such arguments will differ from language to language.
For Fortran with the include file mpif.h or the mpi module, the
document uses <type> to represent a choice variable;
*with the Fortran mpi_f08 module, such arguments are declared with the
Fortran 2008 syntax `TYPE(*), DIMENSION(..)`;*
for C and C++, we use void *.
**Advice to implementors.
The implementor can freely choose how to implement choice arguments in the mpi module,
e.g., with a non-standard compiler-dependent method that has
the quality of the call mechanism in the implicit Fortran
interfaces, or with the method defined for the mpi_f08 module.
-(End of advice to implementors.)***
-MPI-2.2, Chapter 2, Terms and Conventions, Section 2.6 Language Binding, page 16, lines 21-22 read*
MPI bindings are for Fortran 90, though they are designed
to be usable in Fortran 77 environments.
-but should read*
MPI bindings are for Fortran 90 and later, though they ~~are~~ __were originally__ designed
to be usable in Fortran 77 environments.
With the mpi_f08 module, the two Fortran 2008 features assumed type and assumed rank
are also required, see MPI-2.2, Section 2.5.5. on page 15.
(Comment: MPI-2.2, Section 2.5.5. contains the new choice method TYPE(*), DIMENSION(..), see above.)
Originally, MPI-1.1 provided bindings for Fortran 77. These bindings are retained, but they
are now interpreted in the context of the Fortran 90 standard. MPI can still be used with
most Fortran 77 compilers, as noted below. When the term Fortran is used it means Fortran 90.
-but should read*
Originally, MPI-1.1 provided bindings for Fortran 77. These bindings are retained, but they
are now interpreted in the context of the Fortran 90 standard. MPI can still be used with
most Fortran 77 compilers, as noted below. When the term Fortran is used it generally means
Fortran 90 and later__; it means Fortran 2008 and later if the mpi_f08 module is used__.
Text related to this ticket but shown in Ticket #230-B:
-In Section 16.2.1 Overview:*
The INTEGER compile-time constant MPI_SUBARRAYS equals MPI_SUBARRAYS_SUPPORTED if all choice arguments are
defined in explicit interfaces with standardized assumed type and assumed rank, otherwise it equals MPI_SUBARRAYS_UNSUPPORTED.
This constant exists with each Fortran support method, but not in the C/C++ header files.
The value may be different for each Fortran support method. (#234-F)****
-In new Section 16.2.5 Fortran Support Through the mpi_f08 Module:*
Set the INTEGER compile-time constant MPI_SUBARRAYS to MPI_SUBARRAYS_SUPPORTED and
declare choice buffers with the Fortran 2008 feature assumed-type
and assumed-rank "TYPE(*), DIMENSION(..)" if the underlying Fortran
compiler supports it.
With this, non-contiguous sub-arrays are also valid
in nonblocking routines. (#234-F)
Set the MPI_SUBARRAYS compile-time constant to MPI_SUBARRAYS_UNSUPPORTED and
declare choice buffers with a compiler-dependent mechanism that
overrides type checking if the underlying Fortran compiler does not
support the Fortran 2008 assumed-type and assumed-rank notation.
In this case, the use of non-contiguous sub-arrays
in nonblocking calls may be restricted as with the mpi module.(#234-F)
Advice to implementors.
In this case, the choice argument may be implemented with an
explicit interface with compiler directives, for example:
Text related to this ticket but shown in Ticket #232-D:
-In Section 16.2.4 Fortran Support through the mpi Module:*
In this case, the compile-time constant MPI_SUBARRAYS equals
MPI_SUBARRAYS_UNSUPPORTED (#234-F).**
'''See also Tickets #247-S and #249-U.'''
Impact on Implementations
This ticket has major impact on existing MPI implementations,
because the handling of choice buffer arguments must be
reimplemented.
It is definitely different from the existing C (void *) interface.
The buffer description is now a combination of the Fortran
sub-array argument handling (i.e., non-contiguous sub-arrays)
through an array descriptor and the MPI derived datatype handles.
The MPI derived datatype handles apply to a virtual
contiguous memory area that is built out of the portions
defined in the Fortran array descriptor.
Impact on Applications / Users
Removal of all restrictions on the usage of Fortran array triplet-subscripts
(e.g., a(1:100:3)) together with MPI nonblocking routines,
but not with vector subscripts (e.g., a([1,7,8,17,97])).
Alternative Solutions
None.
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
235-G: Corrections to "Problems with Fortran Bindings" (MPI-2.2 p.481) and "Problems Due to Strong Typing" (p.482)
See Ticket #229-A for an overview of the New MPI-3 Fortran Support.
Votes
No votes up to now, because there is no major decision within this ticket.
Description
-Major decisions in this ticket:*
No decisions, only new wording that is more correct.
-Details:*
The problems due to strong typing are partially solved by the new module mpi_f08.
The hints must therefore now differentiate between the Fortran support methods.
With the scalar versus array problem, the example is modified,
because with choice buffers, the problem is normally solved.
Extended Scope
None.
History
Proposed Solution
-MPI-2.2, Section 16.2.2 Problems With Fortran Bindings for MPI, page 481, lines 11-12 read*
It supersedes and replaces the discussion of Fortran bindings in the
original MPI specification (for Fortran 90, not Fortran 77).
-and should be removed*
~~It supersedes and replaces the discussion of Fortran bindings in the
original MPI specification (for Fortran 90, not Fortran 77).~~
-MPI-2.2, Section 16.2.2 Problems With Fortran Bindings for MPI, page 481, lines 14-15 read*
An MPI subroutine with a choice argument may be called with different argument types.
-but should read*
An MPI subroutine with a choice argument may be called with different argument types.
Using the module mpi_f08, this problem is resolved.
-MPI-2.2, Section 16.2.2, Subsection "Problems Due to Strong Typing", page 482, lines 11-14 read*
All MPI functions with choice arguments associate actual arguments of different Fortran
datatypes with the same dummy argument. This is not allowed by Fortran 77, and in
Fortran 90 is technically only allowed if the function is overloaded with a different function
for each type.
In C, the use of void* formal arguments avoids these problems.
-but should read*
All MPI functions with choice arguments associate actual arguments of different Fortran
datatypes with the same dummy argument. This is not allowed by Fortran 77, and in
Fortran 90 is technically only allowed if the function is overloaded with a different function
for each type.
In C, the use of void* formal arguments avoids these problems.
*Similar to C,
with Fortran 2008 and later together with the mpi_f08 module, the problem is avoided
by declaring choice arguments with TYPE(*), DIMENSION(..),
i.e., as assumed-type and assumed-rank dummy arguments.**
-MPI-2.2, Section 16.2.2, Subsection "Problems Due to Strong Typing", page 482, lines 15-24 read*
The following code fragment is technically illegal and may generate a compile-time error.
In practice, it is rare for compilers to do more than issue a warning, though there is concern
that Fortran 90 compilers are more likely to return errors.
-but should read*
__Using INCLUDE 'mpif.h', the__ ~~The~~
following code fragment ~~is~~ __might__ technically ~~illegal~~ __be invalid__ and may generate a compile-time error.
In practice, it is rare for compilers to do more than issue a warning~~, though there is concern
that Fortran 90 compilers are more likely to return errors~~.
Using the mpi_f08 or mpi module, the problem is usually resolved through
the standardized assumed-type and assumed-rank declarations of the dummy arguments,
or with non-standard Fortran options preventing type checking for choice arguments.
-MPI-2.2, Section 16.2.2, Subsection "Problems Due to Strong Typing", page 482, lines 25-30 read*
It is also technically illegal in Fortran to pass a scalar actual argument to an array
dummy argument. Thus the following code fragment may generate an error since the buf
argument to MPI_SEND is declared as an assumed size array buf(*).
It is also technically ~~illegal~~ __invalid__ in Fortran to pass a scalar actual argument to an array
dummy argument. Thus__, when using the module mpi or mpi_f08,__
the following code fragment ~~may~~ __usually__ generate__s__ an error since the ~~buf~~ __dims and periods__
argument__s__ to ~~MPI_SEND is~~ __MPI_CART_CREATE are__ declared as ~~an~~ assumed-size array__s__ ~~buf(*)~~ __INTEGER DIMS(*) and LOGICAL PERIODS(*)__.
Using the deprecated INCLUDE 'mpif.h', compiler warnings are not expected
except if this include file also uses Fortran explicit interfaces.
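A hypothetical version of the code fragment referred to above (the fragment itself is not reproduced in this ticket; argument values are illustrative):
  INTEGER :: dims, comm_cart, ierror   ! scalars, where one-element arrays are expected
  LOGICAL :: periods
  dims    = 2
  periods = .FALSE.
  ! Rejected by the explicit interfaces of the mpi and mpi_f08 modules, because
  ! MPI_CART_CREATE declares INTEGER DIMS(*) and LOGICAL PERIODS(*):
  CALL MPI_CART_CREATE(MPI_COMM_WORLD, 1, dims, periods, .TRUE., comm_cart, ierror)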
-MPI-2.2, Section 16.2.2, Subsection "Problems Due to Strong Typing", page 482, lines 31-38 read*
-Advice to users.* In the event that you run into one of the problems related to type
checking, you may be able to work around it by using a compiler flag, by compiling
separately, or by using an MPI implementation with Extended Fortran Support as described
in Section 16.2.4. An alternative that will usually work with variables local to a
routine but not with arguments to a function or subroutine is to use the EQUIVALENCE
statement to create another variable with a type accepted by the compiler. ''(End of
advice to users.)''
-and should be removed*
~~*Advice to users.* In the event that you run into one of the problems related to type
checking, you may be able to work around it by using a compiler flag, by compiling
separately, or by using an MPI implementation with Extended Fortran Support as described
in Section 16.2.4. An alternative that will usually work with variables local to a
routine but not with arguments to a function or subroutine is to use the EQUIVALENCE
statement to create another variable with a type accepted by the compiler. ''(End of
advice to users.)''~~
Impact on Implementations
None.
Impact on Applications / Users
None.
Alternative Solutions
Entry for the Change Log
None.
236-H: Corrections to "Problems Due to Data Copying and Sequence Association" (MPI-2.2 page 482)
See Ticket #229-A for an overview of the New MPI-3 Fortran Support.
Votes
No votes up to now, because there is no major decision within this ticket.
Description
-Major decisions in this ticket:*
No decision, only new wording to reflect the new methods and the constant MPI_SUBARRAYS
-Details:*
Extended Scope
None.
History
Proposed Solution
-MPI-2.2, Section 16.2.2, Subsection "Problems Due to Data Copying and Sequence Association", page 482, lines 41 - page 484, line 18 reads*
Implicit in MPI is the idea of a contiguous chunk of memory accessible through a linear[[BR]]
...[[BR]]
compiler cannot be used for applications that use memory references across subroutine calls
as in the example above.
-but should read*
If MPI_SUBARRAYS equals MPI_SUBARRAYS_SUPPORTED:
(for better readability of this ticket, the following new text is not underlined although it should)
Choice buffer arguments are declared as TYPE(*), DIMENSION(..).
For example, consider the following code fragment:
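The fragment itself is missing in this ticket text; based on the surrounding description (a strided subarray s(1:100:5), the datatype description "3, MPI_REAL", and a matching MPI_WAIT), it presumably resembles the following sketch; the remaining arguments (dest, tag, comm, rq, status, ierror) are assumed to be declared elsewhere, and the matching receive would use r(1:100:5) analogously:
  REAL :: s(100), r(100)
  CALL MPI_Isend(s(1:100:5), 3, MPI_REAL, dest, tag, comm, rq, ierror)
  ! ... computation that does not modify s(1:100:5) ...
  CALL MPI_Wait(rq, status, ierror)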
In this case, the individual elements s(1), s(6), s(11), etc. are sent
between the start of MPI_ISEND and the end of MPI_WAIT even though the
compiled code may not copy s(1:100:5) to a contiguous temporary
scratch buffer. Instead, the compiled code may pass a descriptor to
MPI_ISEND that allows MPI to operate directly on s(1), s(6), s(11),
..., s(96).
All non-blocking MPI communication functions behave as if the
user-specified elements of choice buffers are copied to a contiguous
scratch buffer in the MPI runtime environment. All datatype
descriptions (in the example above, "3, MPI_REAL") read and store data
from and to this virtual contiguous scratch buffer. Displacements in
MPI derived datatypes are relative to the beginning of this virtual
contiguous scratch buffer. Upon completion of a non-blocking receive
operation (e.g., when MPI_WAIT on a corresponding MPI_Request
returns), it is as if the received data has been copied from the
virtual contiguous scratch buffer back to the non-contiguous
application buffer. In the example above, r(1), r(6), and r(11)
will be filled with the received data when MPI_WAIT returns.
-Advice to implementors.*
The Fortran descriptor for TYPE(*), DIMENSION(..) arguments contains enough
information that the MPI library can make a real contiguous copy of
non-contiguous user buffers. Efficient implementations may avoid such additional
memory-to-memory data copying.
-(End of advice to implementors.)
-Rationale.
If MPI_SUBARRAYS equals MPI_SUBARRAYS_SUPPORTED, non-contiguous buffers are handled inside of the MPI library
instead of by the compiled user code. Therefore the scope of scratch buffers can
be from the beginning of a non-blocking operation until the completion of the
operation although beginning and completion are implemented in different routines.
If MPI_SUBARRAYS equals MPI_SUBARRAYS_UNSUPPORTED,
such scratch buffers can be organized only by the compiler
for the duration of the non-blocking call, which is too short for implementing the
whole MPI operation.
-(End of rationale.)
If MPI_SUBARRAYS equals MPI_SUBARRAYS_UNSUPPORTED:
Implicit in MPI is the idea of a contiguous chunk of memory accessible through a linear[[BR]]
...[[BR]]
compiler cannot be used for applications that use memory references across subroutine calls
as in the example above.
Impact on Implementations
None. (This is only a descriptive ticket.)
Impact on Applications / Users
None.
Alternative Solutions
Entry for the Change Log
None.
237-I: Corrections to problems due to "Fortran 90 Derived Types" (MPI-2.2 page 484)
See Ticket #229-A for an overview of the New MPI-3 Fortran Support.
Votes
No votes up to now, because there is no major decision within this ticket.
Description
-Major decisions in this ticket:*
No decision, only new wording that corrects this incorrect section.
-Details:*
This section is currently wrong.
MPI works correctly with Fortran sequence derived types.
MPI does not work for Fortran non-sequence derived types.
The section must therefore be corrected.
MPI does not explicitly support passing Fortran 90 derived types to choice
dummy arguments.
Indeed, for MPI implementations that provide explicit interfaces through the mpi
module a compiler will reject derived type actual arguments at compile time. Even when no
explicit interfaces are given, users should be aware that Fortran 90 provides no guarantee
of sequence association for derived types or arrays of derived types. For instance, an array
of a derived type consisting of two elements may be implemented as an array of the first
elements followed by an array of the second. Use of the SEQUENCE attribute may help here,
somewhat.
The following code fragment shows one possible way to send a derived type in Fortran.
The example assumes that all data is passed by address.
type mytype
integer i
real x
double precision d
end type mytype
-but should read*
Fortran 90 Derived Types
MPI does ~~not~~ explicitly support passing Fortran ~~90~~ __sequence__ derived types to choice
dummy arguments, but does not support Fortran non-sequence derived types.
~~Indeed, for MPI implementations that provide explicit interfaces through the mpi
module a compiler will reject derived type actual arguments at compile time. Even when no
explicit interfaces are given, users should be aware that Fortran 90 provides no guarantee
of sequence association for derived types or arrays of derived types. For instance, an array
of a derived type consisting of two elements may be implemented as an array of the first
elements followed by an array of the second. Use of the SEQUENCE attribute may help here,
somewhat.~~
The following code fragment shows one possible type that can be used
to send a sequence derived type in Fortran.
type mytype
SEQUENCE
integer i
real x
double precision d
end type mytype
! unpleasant to send foo%i instead of foo, but it works for scalar
! entities of type mytype
call MPI_SEND(foo%i, 1, newtype, ...)
-but should read* (comment and %i removed)
call MPI_SEND(foo, 1, newtype, ...)
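The construction of newtype is not shown in the quoted fragment; a sketch of one possible way to build it for the sequence type above (not the standard's own text) is:
  type(mytype) :: foo
  integer :: newtype, ierror, blocklen(3), types(3)
  integer(kind=MPI_ADDRESS_KIND) :: disp(3), base
  call MPI_GET_ADDRESS(foo%i, disp(1), ierror)
  call MPI_GET_ADDRESS(foo%x, disp(2), ierror)
  call MPI_GET_ADDRESS(foo%d, disp(3), ierror)
  base = disp(1)
  disp = disp - base                       ! displacements relative to the start of foo
  blocklen = 1
  types = (/ MPI_INTEGER, MPI_REAL, MPI_DOUBLE_PRECISION /)
  call MPI_TYPE_CREATE_STRUCT(3, blocklen, disp, types, newtype, ierror)
  call MPI_TYPE_COMMIT(newtype, ierror)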
Impact on Implementations
None.
Impact on Applications / Users
The user learns that he/she can use Fortran sequence derived types.
This was already possible in the past; only this incorrect advice could prevent
users from using this MPI feature.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section 16.2.2 on page 481.[[BR]]
Fortran sequence derived types can be used for buffers.
The section on Fortran derived types was therefore modified.
238-J: Corrections to "Registers and Compiler Optimizations" (MPI-2.2 page 371) and "A Problem with Register Optimization" (page 485)
See Ticket #229-A for an overview of the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: 6 yes, 0 no, 9 abstain.[[BR]]
With the comment: "or any future method".
Description
-Major decisions in this ticket:*
New problem with "Temporary Memory Modifications"
with nonblocking and one-sided communication, and with split collective I/O operations.
Additional advice about "Problems with MPI and Fortran optimization".
Additional helper routine MPI_F_SYNC_REG to substitute the user-written DD(buf).
New advice about the Fortran ASYNCHRONOUS attribute.
-Details:*
The sections "Registers and Compiler Optimizations" (MPI-2.2 page 371) and
"A Problem with Register Optimization" MPI-2.2, page 485, line 34 - page 487, line 41
are correct, but do not show the total problem:[[BR]]
Especially when overlapping computation and communication, one can
get in trouble due to "temporary memory copies and memory overwriting".
Example:
While receiving halo data with MPI_Irecv into the non-contiguous
subarrays a(1,1:100) and a(100,1:100) of an array a(100,100), the numerical part of the application
may operate on two nested loops do j=1,100; do i=2,98; a(i,j)=... .
The compiler may implement loop fusion by storing a(1,1:100) and a(100,1:100)
into a scratch array, executing the fused loop over the whole array a(100,100), and then restoring the
data in a(1,1:100) and a(100,1:100) from the scratch buffer.
The advice in MPI-2.2, Chapter 11, One-sided communications, Section 11.7.3 Registers and Compiler Optimizations, page 372, lines 1-9
does not fully fit the advice in section "A Problem with Register Optimization" MPI-2.2, page 485ff.
Therefore, it should be slightly modified.
Citing from the Fortran 2008 standard:
5.3.4 ASYNCHRONOUS attribute
An entity with the ASYNCHRONOUS attribute is a variable that may be subject to asynchronous input/output.
The base object of a variable shall have the ASYNCHRONOUS attribute in a scoping unit if
the variable appears in an executable statement or specification expression in that scoping unit and
any statement of the scoping unit is executed while the variable is a pending I/O storage sequence affector (9.6.2.5).
5.3.17 TARGET attribute
The TARGET attribute specifies that a data object may have a pointer associated with it (7.2.2).
An object without the TARGET attribute shall not have a pointer associated with it.
5.3.19 VOLATILE attribute
The VOLATILE attribute specifies that an object may be referenced, defined, or become undefined, by means
not specified by the program. A pointer with the VOLATILE attribute may additionally have its association
status, dynamic type and type parameters, and array bounds changed by means not specified by the program.
An allocatable object with the VOLATILE attribute may additionally have its allocation status, dynamic type
and type parameters, and array bounds changed by means not specified by the program.
MPI implementations will avoid this problem for standard conforming C programs.
Many Fortran compilers will avoid this problem, without disabling compiler optimizations.
However, in order to avoid register coherence problems in a completely portable manner,
users should restrict their use of RMA windows to variables stored in COMMON blocks,
or to variables that were declared VOLATILE
(while VOLATILE is not a standard Fortran declaration, it is supported by many Fortran compilers).
Details and an additional solution are
discussed in Section 16.2.2, "A Problem with Register Optimization," on page 485. See also,
"Problems Due to Data Copying and Sequence Association," on page 482, for additional
Fortran problems.
-but should read*
MPI implementations will avoid this problem for standard conforming C programs.
Many Fortran compilers will avoid this problem, without disabling compiler optimizations.
However, in order to avoid register coherence problems in a completely portable manner,
users should restrict their use of RMA windows to variables stored in modules or in COMMON blocks,
or to variables that were declared VOLATILE
(but this attribute may inhibit optimization of any code containing the RMA window) ~~(while VOLATILE is not a standard Fortran declaration, it is supported by many Fortran compilers)~~.
Further details and additional solutions are
discussed in Section 16.2.2, "A Problem with Register Optimization," on page 485. See also,
"Problems Due to Data Copying and Sequence Association," on page 482, for additional
Fortran problems.
'''MPI-2.2, Section 16.2.2, Subsection "A Problem with Register Optimization", page 485, lines 34-42 read
A Problem with Register Optimization
MPI provides operations that may be hidden from the user code and run concurrently with
it, accessing the same memory as user code. Examples include the data transfer for an
MPI_IRECV. The optimizer of a compiler will assume that it can recognize periods when a
copy of a variable can be kept in a register without reloading from or storing to memory.
When the user code is working with a register copy of some variable while the hidden
operation reads or writes the memory copy, problems occur. This section discusses register
optimization pitfalls.
-but should read*
Problems with Register Optimization and Temporary Memory Modifications
MPI provides operations that may be hidden from the user code and run concurrently with
it, accessing the same memory as user code. Examples include the data transfer for an
MPI_IRECV. The optimizer of a compiler will assume that it can recognize periods when a
copy of a variable can be kept in a register without reloading from or storing to memory.
When the user code is working with a register copy of some variable while the hidden
operation reads or writes the memory copy, problems occur. This section discusses register
optimization pitfalls and problems with temporary memory modifications.
These problems are independent of the Fortran support method, i.e.,
they occur with the mpi_f08 module, the mpi module, and the mpif.h include file.
(for better readability of this ticket, the following new text is not underlined although it should)
This section shows four problematic usage areas
(the abbreviations in parentheses are used in the table below):
Usage of non-blocking routines (Nonblock).
Usage of one-sided routines (1sided).
Usage of MPI parallel file I/O split collective operations (Split).
Use of MPI_BOTTOM together with absolute displacements
in MPI datatypes, or relative displacements between
two variables in such datatypes (Bottom).
The compiler is allowed to cause two optimization problems:
Register optimization problems and code movements (Register).
Temporary memory modifications (Memory).
These optimization problems do not occur in all usage areas:
            Nonblock   1sided   Split    Bottom
Register    occurs     occurs   -not-    occurs
Memory      occurs     occurs   occurs   -not-
The application writer has several methods to
circumvent parts of these problems with special declarations
for the used send and receive buffers:
Usage of the Fortran ASYNCHRONOUS attribute.
Usage of the Fortran TARGET attribute.
Usage of the helper routine MPI_F_SYNC_REG or a user-written
dummy routine DD(buf).
Declaring the buffer as Fortran module data or within a Fortran common block.
Usage of the Fortran VOLATILE attribute.
Each of these methods may solve only a subset of the problems,
may have more or less performance drawbacks, and may
not be usable in every application context.
The following table shows the usability of each method:
                 Nonblock     Nonblock     1sided       1sided       Split        Bottom       overhead
                 Register     Memory       Register     Memory       Memory       Register     may be
Examples         16.12,       16.xx        Sect.                                  16.11,
                 16.12(new)                11.7.3                                 16.11(new)
ASYNCHRONOUS     solved       solved       may be       may be       solved       may be       medium
TARGET           solved       NOT solved   solved       NOT solved   NOT solved   solved       low-medium
MPI_F_SYNC_REG   solved       NOT solved   solved       NOT solved   NOT solved   solved       low
Module Data      solved       NOT solved   solved       NOT solved   NOT solved   solved       low-medium
VOLATILE         solved       solved       solved       solved       solved       solved       high
The next paragraphs describe the problems in detail.
'''MPI-2.2, Section 16.2.2, Subsection "A Problem with Register Optimization", page 485, lines 43-48 read
When a variable is local to a Fortran subroutine (i.e., not in a module or COMMON
block), the compiler will assume that it cannot be modified by a called subroutine unless it
is an actual argument of the call. In the most common linkage convention, the subroutine
is expected to save and restore certain registers. Thus, the optimizer will assume that a
register which held a valid copy of such a variable before the call will still hold a valid copy
on return.
-but should read*
\paragraph{Nonblocking operations and register optimization / code movement.}
When a variable is local to a Fortran subroutine (i.e., not in a module or COMMON
block), the compiler will assume that it cannot be modified by a called subroutine unless it
is an actual argument of the call. In the most common linkage convention, the subroutine
is expected to save and restore certain registers. Thus, the optimizer will assume that a
register which held a valid copy of such a variable before the call will still hold a valid copy
on return.
'''MPI-2.2, Section 16.2.2, Subsection "A Problem with Register Optimization", page 486, lines 28-42 read
Example 16.12 shows extreme, but allowed, possibilities.
Example 16.12 Fortran 90 register optimization – extreme.
Source                      compiled as                 or compiled as
call MPI_IRECV(buf,..req)   call MPI_IRECV(buf,..req)   call MPI_IRECV(buf,..req)
                            register = buf              b1 = buf
call MPI_WAIT(req,..)       call MPI_WAIT(req,..)       call MPI_WAIT(req,..)
b1 = buf                    b1 := register
MPI_WAIT on a concurrent thread modifies buf between the invocation of MPI_IRECV
and the finish of MPI_WAIT. But the compiler cannot see any possibility that buf can be
changed after MPI_IRECV has returned, and may schedule the load of buf earlier than
typed in the source. It has no reason to avoid using a register to hold buf across the call to
MPI_WAIT. It also may reorder the instructions as in the case on the right.
-and should be moved after page 485, line 48 and the following line should be added before the IRECV line:*
REAL :: buf, b1 REAL :: buf, b1 REAL :: buf, b1
-After this example, the following new example should be added*
(for better readability of this ticket, the following new text is not underlined although it should):
Example 16.12(new) Similar example with MPI_ISEND
Source                      compiled as                 and internally
REAL :: buf, copy           REAL :: buf, copy           REAL :: buf, copy
buf = val                   buf = val                   buf = val
call MPI_ISEND(buf,..req)   call MPI_ISEND(buf,..req)   addr = &buf
copy = buf                  copy = buf                  copy = val
                            buf = val_overwrite         buf = val_overwrite
call MPI_WAIT(req,..)       call MPI_WAIT(req,..)       send(*addr)
buf = val_overwrite
Due to allowed code movement, the content of buf may already be overwritten
when the content of buf is actually sent.
The code movement is permitted, because the compiler cannot detect a possible access
to buf in MPI_WAIT (or in a second thread between the start of MPI_ISEND and the end
of MPI_WAIT).
Note that code movement can also occur across subroutine boundaries when
subroutines or functions are inlined.
This register optimization / code movement problem does not occur
with MPI parallel file I/O split collective operations,
because in the ..._BEGIN and ..._END calls,
the same buffer has to be provided as actual argument.
-After this example, the following new paragraph should be added*
(for better readability of this ticket, the following new text is not underlined although it should):
\paragraph{Nonblocking operations and temporary memory modifications.}
The compiler is allowed to temporarily modify data in memory.
Example 16.xx shows one possibility.
Example 16.xx Overlapping Communication and Computation
USE mpi_f08
REAL :: buf(100,100)
CALL MPI_Irecv(buf(1,1:100),...req,...)
DO j=1,100
DO i=2,100
buf(i,j)=....
END DO
END DO
CALL MPI_Wait(req,...)
The compiler may substitute the nested loops through loop fusion by
EQUIVALENCE (buf(1,1), buf_1dim(1))
DO h=1,100
tmp(h)=buf(1,h)
END DO
DO j=1,10000
buf_1dim(j)=...
END DO
DO h=1,100
buf(1,h)=tmp(h)
END DO
with buf_1dim(10000) as the 1-dimensional equivalence of buf(100,100).
The nonblocking receive may receive the data in the boundary buf(1,1:100)
while the fused loop temporarily uses this part of the buffer.
When the tmp data is written back to buf, the old data is restored
and the received data is lost.
Note that this problem also occurs
with one-sided communication
with the local buffer at the origin process
between an RMA call and the ensuing synchronization call
and with the window buffer at the target process
between two ensuing synchronization calls,
and also with MPI parallel file I/O split collective operations
with the local buffer between the ..._BEGIN and ..._END call.
This type of compiler optimization can be prevented when
buf is declared with the Fortran attribute ASYNCHRONOUS:
REAL, ASYNCHRONOUS :: buf(100,100)
\paragraph{One-sided communication.}
An example with instruction reordering due to register optimization can be found
in Section 11.7.3 on page 371.
'''MPI-2.2, Section 16.2.2, Subsection "A Problem with Register Optimization", page 486, lines 1-27 read
Normally users are not afflicted with this. But the user should pay attention to this [[BR]]
... [[BR]]
and MPI_BOTTOM.
-but should read*
\paragraph{MPI_BOTTOM and combining independent variables in datatypes.}
Normally users are not afflicted with this. But the user should pay attention to this [[BR]]
... [[BR]]
and MPI_BOTTOM.
-After these paragraphs, the following paragraphs should be added*
(for better readability of this ticket, the following new text is not underlined although it should):
Example 16.11(new) Similar example with MPI_SEND
This source ...                            can be compiled as:
! buf contains val_old                     ! buf contains val_old
buf = val_new                              ! dead code: buf=val_new is removed
call MPI_SEND(MPI_BOTTOM,1,type,...)       call MPI_SEND(...)
! with buf as a displacement in type       ! i.e. val_old is sent
buf = val_overwrite                        buf = val_overwrite
Several successive assignments to the same variable can be combined in this way,
so that only the last assignment is executed.
Successive means that no interfering read access to this variable occurs in between.
The compiler cannot detect that the call to MPI_SEND is interfering,
because the read access to buf is hidden by the usage of MPI_BOTTOM.
\paragraph{Solutions.}
The following paragraphs show in detail how these problems can be
solved in a portable way.
Several solutions are presented,
because these solutions have different implications on performance.
Only one solution (with VOLATILE) solves all problems, but it may have
the most negative impact on performance.
\paragraph{Fortran ASYNCHRONOUS attribute.}
Declaring a buffer with the Fortran ASYNCHRONOUS attribute in a scoping unit (or BLOCK)
tells the compiler that any statement of the scoping unit may be executed while the buffer
is affected by a pending asynchronous input/output operation.
Each library call (e.g., to an MPI routine) within the scoping unit may
contain a Fortran asynchronous I/O statement, e.g.,
the Fortran WAIT statement.
In the case of nonblocking MPI communication, the send and receive buffers should be
declared with the Fortran ASYNCHRONOUS attribute within each scoping unit (or BLOCK)
where the buffers are declared and statements are executed between
the start (e.g., MPI_IRECV) and completion (e.g., MPI_WAIT)
of the nonblocking communication.
Declaring REAL, ASYNCHRONOUS :: buf in Examples 16.12 and 16.12(new),
and REAL, ASYNCHRONOUS :: buf(100,100) in Example 16.xx
solves the register optimization and temporary memory modification problems.
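A minimal sketch of this declaration pattern in a full scoping unit, assuming the mpi_f08 bindings proposed in this ticket series (#230-B, #231-C, #234-F); the routine name exchange_and_compute and the message parameters are illustrative only:
SUBROUTINE exchange_and_compute(comm)
  USE mpi_f08
  TYPE(MPI_Comm), INTENT(IN) :: comm
  ! ASYNCHRONOUS: the buffer may be accessed by the pending MPI operation
  ! while statements of this scoping unit are executed
  REAL, ASYNCHRONOUS :: buf(100,100)
  TYPE(MPI_Request) :: req
  TYPE(MPI_Status)  :: status
  INTEGER :: i, j
  CALL MPI_Irecv(buf(1,1:100), 100, MPI_REAL, 0, 0, comm, req)   ! halo column, non-contiguous (#234-F)
  DO j = 1, 100                  ! computation on the interior,
    DO i = 2, 100                ! overlapped with the pending receive
      buf(i,j) = 0.0
    END DO
  END DO
  CALL MPI_Wait(req, status)     ! ierror omitted (optional, Ticket #239-K)
END SUBROUTINE exchange_and_compute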
-Rationale.
A combination of a nonblocking MPI communication call with a buffer in
the argument list together with a subsequent call to MPI_WAIT or MPI_TEST
is similar to the combination of a Fortran asynchronous read or write together
with the matching Fortran WAIT statement.
To prevent incorrect register optimizations or code movement, the Fortran standard
requires, in the case of Fortran I/O, that the ASYNCHRONOUS attribute is defined for the buffer.
The ASYNCHRONOUS attribute also works with the asynchronous MPI routines because the compiler
must expect that inside of the MPI routines such Fortran asynchronous read, write,
or wait routines may be called.
-(End of rationale.)
In Examples 16.11 and 16.11(new) and also in the example in Section 11.7.3 on page 371,
the ASYNCHRONOUS attribute may also help,
but this help is not guaranteed because there is no I/O counterpart to this
MPI usage.
-Rationale.
In the case of using MPI_BOTTOM or one-sided synchronizations (e.g., MPI_WIN_FENCE),
the buffer is not specified, i.e., those calls can include only a Fortran WAIT statement
(or another routine that finishes an asynchronous I/O operation). Additionally, with Fortran asynchronous I/O,
it is a clear and forbidden race condition to
store new data into the buffer while an asynchronous I/O operation is active.
Exactly this kind of store into the buffer would occur in Example 16.11 if there had
been an initialization buf=val_init prior to the call to MPI_RECV,
and it occurs in Example 16.11(new) with the statement buf=val_new.
-(End of rationale.)
\paragraph{Fortran TARGET attribute.}
Declaring a buffer with the Fortran TARGET attribute in a scoping unit (or BLOCK)
tells the compiler that any statement of the scoping unit may be executed while
some pointer to the buffer exists.
Calling a library routine (e.g., an MPI routine) may imply that such a pointer is used to modify the buffer.
The TARGET attribute solves problems of instruction reordering, code movement, and register optimization
related to nonblocking and one-sided communication,
or related to the usage of MPI_BOTTOM and derived datatype handles.
Declaring REAL, TARGET :: buf solves the register optimization problem
in Examples 16.12, 16.12(new), 16.11, and 16.11(new).
Unfortunately, the TARGET attribute has no impact on problems
caused by asynchronous accesses between the start and
end of a nonblocking or one-sided communication,
i.e., problems caused by temporary memory modifications are not solved.
Example 16.xx cannot be solved with the TARGET attribute.
-MPI-2.2, Section 16.2.2, Subsection "A Problem with Register Optimization", page 486, line 43 - page 487, lines 25 reads*
To prevent instruction reordering or the allocation of a buffer in a register there are
two possibilities in portable Fortran code:
The compiler may be prevented from moving a reference to a buffer across a call to
an MPI subroutine by surrounding the call by calls to an external subroutine with
the buffer as an actual argument. Note that if the intent is declared in the external
subroutine, it must be OUT or INOUT. The subroutine itself may have an empty body,
but the compiler does not know this and has to assume that the buffer may be altered.
For example, the above call of MPI_RECV might be replaced by
(assuming that buf has type INTEGER). The compiler may be similarly prevented from
moving a reference to a variable across a call to an MPI subroutine.
In the case of a nonblocking call, as in the above call of MPI_WAIT, no reference to
the buffer is permitted until it has been verified that the transfer has been completed.
Therefore, in this case, the extra call ahead of the MPI call is not necessary, i.e., the
call of MPI_WAIT in the example might be replaced by
call MPI_WAIT(req,..)
call DD(buf)
-but should read*
\paragraph{Calling MPI_F_SYNC_REG.}
~~To prevent instruction reordering or the allocation of a buffer in a register there are
two possibilities in portable Fortran code:~~ [[BR]]
The compiler may be prevented from moving a reference to a buffer across a call to
an MPI subroutine by surrounding the call by calls to an external subroutine with
the buffer as an actual argument.
The MPI library provides MPI_F_SYNC_REG for this purpose, see Section 16.2.5(new) on page 489.
(for better readability of this ticket, the following new text is not underlined although it should)
Examples 16.12 and 16.12(new) can be solved
by calling MPI_F_SYNC_REG(buf) once directly after MPI_WAIT.
The call MPI_F_SYNC_REG(buf) prevents moving the last line
before the MPI_WAIT call.
Further calls to MPI_F_SYNC_REG(buf) are not needed,
because it is still correct if the additional read access copy=buf
is moved behind MPI_WAIT and before buf=val_overwrite.
Examples 16.11 and 16.11(new) can be solved with
two additional calls to MPI_F_SYNC_REG(buf), one directly
before MPI_RECV/MPI_SEND, and one directly after this communication operation.
The first call to MPI_F_SYNC_REG(buf) is needed to finish all load and store
references to buf prior to MPI_RECV/MPI_SEND,
and the second call is needed to assure that subsequent accesses to buf are not moved
before MPI_RECV/MPI_SEND.
In the example in Section 11.7.3 on page 371, two asynchronous accesses must be protected:
In Process 1, the access to bbbb must be protected similar to Example 16.12, i.e.,
a call to MPI_F_SYNC_REG(bbbb) is needed after the second MPI_WIN_FENCE to guarantee that
further accesses to bbbb are not moved ahead of the call to MPI_WIN_FENCE.
In Process 2, both calls to MPI_WIN_FENCE together act as a communication call with
MPI_BOTTOM as the buffer, i.e., before the first fence and after the second fence,
a call to MPI_F_SYNC_REG(buff) is needed to guarantee that accesses to buff are not moved
after or ahead of the calls to MPI_WIN_FENCE.
Using MPI_GET instead of MPI_PUT, the same calls to MPI_F_SYNC_REG are necessary.
Source of Process 1             Source of Process 2
bbbb = 777                      buff = 999
                                call MPI_F_SYNC_REG(buff)
call MPI_WIN_FENCE              call MPI_WIN_FENCE
call MPI_PUT(bbbb
  into buff of process 2)
call MPI_WIN_FENCE              call MPI_WIN_FENCE
call MPI_F_SYNC_REG(bbbb)       call MPI_F_SYNC_REG(buff)
                                ccc = buff
The temporary memory modification problem, i.e., Example 16.xx, cannot be solved with this method.
\paragraph{A user defined DD instead of MPI_F_SYNC_REG.}
Instead of MPI_F_SYNC_REG, one can also use a user-defined external subroutine that is compiled separately:
subroutine DD(buf)
real buf
end
Note that if the intent is declared in the external
subroutine, it must be OUT or INOUT. The subroutine itself may have an empty body,
but the compiler does not know this and has to assume that the buffer may be altered.
For example, the above call of MPI_RECV might be replaced by
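A minimal sketch of that replacement pattern, mirroring the surrounding description (buf stands for the receive buffer referred to above; the exact arguments of MPI_RECV are not reproduced):
call DD(buf)
call MPI_RECV(MPI_BOTTOM,...)   ! buf is accessed only via a displacement in the datatype
call DD(buf)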
-Section 16.2.2, Subsection "A Problem with Register Optimization" MPI-2.2, page 487, lines 26-31 read*
An alternative is to put the buffer or variable into a module or a common block and
access it through a USE or COMMON statement in each scope where it is referenced,
defined or appears as an actual argument in a call to an MPI routine. The compiler
will then have to assume that the MPI procedure (MPI_RECV in the above example)
may alter the buffer or variable, provided that the compiler cannot analyze that the
MPI procedure does not reference the module or common block.
-but should read*
\paragraph{Module data and COMMON blocks.}
An alternative is to put the buffer or variable into a module or a common block and
access it through a USE or COMMON statement in each scope where it is referenced,
defined or appears as an actual argument in a call to an MPI routine. The compiler
will then have to assume that the MPI procedure (MPI_RECV in the above example)
may alter the buffer or variable, provided that the compiler cannot analyze that the
MPI procedure does not reference the module or common block.
This method solves problems of instruction reordering, code movement, and register optimization
related to nonblocking and one-sided communication,
or related to the usage of MPI_BOTTOM and derived datatype handles.
Unfortunately, this method has no impact on problems
caused by asynchronous accesses between the start and
end of a nonblocking or one-sided communication,
i.e., problems caused by temporary memory modifications are not solved.
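A minimal sketch of this method (the module name comm_buffers and the receive parameters are illustrative only; mpi_f08 bindings assumed):
MODULE comm_buffers
  REAL :: buf(100,100)          ! buffer placed in module data
END MODULE comm_buffers

SUBROUTINE do_recv(comm)
  USE comm_buffers              ! the compiler must assume MPI routines may access buf
  USE mpi_f08
  TYPE(MPI_Comm), INTENT(IN) :: comm
  TYPE(MPI_Status) :: status
  CALL MPI_Recv(buf, 10000, MPI_REAL, 0, 0, comm, status)   ! ierror omitted (Ticket #239-K)
END SUBROUTINE do_recv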
-Section 16.2.2, Subsection "A Problem with Register Optimization" MPI-2.2, page 487, lines 33-35 read*
The VOLATILE attribute, available in later versions of Fortran, gives the buffer or variable
the properties needed, but it may inhibit optimization of any code containing the buffer
or variable.
-but should read*
\paragraph{Fortran VOLATILE attribute.}
The VOLATILE attribute, available in later versions of Fortran, gives the buffer or variable
the properties needed, but it may inhibit optimization of any code containing the buffer
or variable.
'''MPI-2.2, before Section 16.2.5 "Additional Support for Fortran Numeric Intrinsic Types", on page 489, line 31
the following new section is added,'''
i.e., "16.2.5(new)" means "16.2.5" and all subsequent existing sections are renumbered
(for better readability of this ticket, the following new text is not underlined although it should):
16.2.5(new) Additional Support for Fortran Register-Memory-Synchronization
As described in Section "A Problem with Register Optimization" on page 485, a dummy call
is needed to tell the compiler that registers are to be flushed for a given buffer.
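One possible shape of the language binding, given as a hedged sketch (assumptions: the choice-buffer declaration of Ticket #234-F, the case conventions of Ticket #246-R, and no ierror argument per the rationale below):
MPI_F_SYNC_REG(buf)
  INOUT  buf     initial address of buffer (choice)

SUBROUTINE MPI_F_sync_reg(buf)
  TYPE(*), DIMENSION(..) :: buf
END SUBROUTINE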
It is a generic Fortran routine and has a Fortran binding only.
This routine has no operation associated with it. It must be compiled as part of the MPI library in
such a way that a Fortran compiler cannot detect that the routine
has an empty body.
It is used only to tell the compiler that a cached register value of a variable or buffer
should be flushed, i.e., stored back to memory (when necessary) or invalidated.
-Rationale.* This function is not available in other languages because it would not be
useful.
This routine has no ierror return argument because there is no operation
that could detect an error.
-(End of rationale.)*
-Advice to implementors.
It is recommended to bind this routine to a C routine to minimize
the risk that the Fortran compiler detects that this routine is empty,
i.e., that the compiler could remove a call to this routine
as part of the automatic optimization.
-(End of advice to implementors.)
-Page 499, Example 16.13 and all following examples are renumbered to 16.14 ...*
Impact on Implementations
Impact on Applications / Users
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
239-K: IERROR optional
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
In the current MPI Fortran interface, the IERROR dummy argument is mandatory.
In the MPI C interface, the MPI routines can be called as a function
(i.e., the ierror value is returned) or as a procedure (i.e., ignoring
the ierror value), and therefore the ierror is optional.
With this ticket, the Fortran IERROR dummy argument is declared as optional
in all MPI routines that provide an IERROR.
-Details:*
For user-defined callback functions (e.g., comm_copy_attr_fn) and their
predefined callbacks (e.g., MPI_NULL_COPY_FN), ierror should not be optional,
i.e., these user-defined functions should not need to check whether the MPI
library calls these routines with or without an actual ierror output argument.
Extended Scope
None.
History
Since Fortran 90/95, the OPTIONAL attribute can be specified for dummy arguments.
If only the last argument is optional, then the routine can be called
with and without this last argument, i.e., using
only positional arguments and without the need to use a keyword argument
(as in a3=33 in the second call of the sketch below).
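A minimal illustrative sketch (the subroutine foo and its integer arguments are hypothetical; the second call shows the keyword form a3=33 referred to above):
SUBROUTINE foo(a1, a2, a3)
  INTEGER, INTENT(IN) :: a1, a2
  INTEGER, INTENT(IN), OPTIONAL :: a3
END SUBROUTINE foo

CALL foo(11, 22, 33)       ! all arguments positional
CALL foo(11, 22, a3=33)    ! keyword form; not required because a3 is the last argument
CALL foo(11, 22)           ! optional last argument omitted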
All MPI Fortran subroutines have a return code in the last argument.
-but should read*
All MPI Fortran subroutines have a return code in the last argument.
With USE mpi_f08, this last argument is declared as OPTIONAL,
except for user-defined callback functions (e.g., comm_copy_attr_fn) and their
predefined callbacks (e.g., MPI_NULL_COPY_FN).
Text related to this ticket but shown in Ticket #230-B:
-_In new Section 16.2.5 Fortran Support through Module mpif08:*
All ierror output arguments are declared as optional,
except for user-defined callback functions (e.g., comm_copy_attr_fn) and their
predefined callbacks (e.g., MPI_NULL_COPY_FN). (#239-K)
-Rationale.
For user-defined callback functions (e.g., comm_copy_attr_fn) and their
predefined callbacks (e.g., MPI_NULL_COPY_FN), the ierror argument is not optional,
i.e., these user-defined functions should not need to check whether the MPI
library calls these routines with or without an actual ierror output argument.
-(End of rationale.) (#239-K)
Impact on Implementations
The wrapper from Fortran to C must check whether an actual IERROR argument
is provided by the calling Fortran application;
only in this case can and must the error value returned by the C MPI routine
be stored into the actual IERROR argument, as in the sketch below.
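A hedged sketch of such a wrapper check (the wrapper name MPI_Barrier_f08 and the internal routine internal_barrier are purely illustrative; the implementation technique is not prescribed by this ticket):
SUBROUTINE MPI_Barrier_f08(comm, ierror)
  TYPE(MPI_Comm), INTENT(IN) :: comm
  INTEGER, OPTIONAL, INTENT(OUT) :: ierror
  INTEGER :: err
  CALL internal_barrier(comm, err)      ! hypothetical call into the C library
  IF (PRESENT(ierror)) ierror = err     ! store the error value only if IERROR was supplied
END SUBROUTINE MPI_Barrier_f08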
Impact on Applications / Users
For existing applications, there is no impact.
In modified or newly written applications, the actual IERROR argument
can be omitted.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
240-L: New syntax used in all three (mpif.h, mpi, mpi_f08) See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
241-M: Not including old deprecated routines from MPI-2.0 - MPI-2.2
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: yes by acclamation.
Description
-Major decisions in this ticket:*
Not to include deprecated routines into the new Fortran 2008 bindings.
-Details:*
With this ticket, the Forum should decide that deprecated routines will not get
the new Fortran 2008 bindings.
There are no technical reasons for not providing these routines,
because normally, there isn't any difference
between the C backend of module mpi and mpi_f08.
Extended Scope
None.
History
Proposed Solution
No changes to Section 15.1 "Deprecated since MPI-2.0"
and Section 15.2. "Deprecated since MPI-2.2"
Text related to this ticket but shown in Ticket #230-B:
-_In new Section 16.2.5 Fortran Support through Module mpif08:*
With this module, new Fortran 2008 definitions are added for each MPI routine (#247-S),
except for routines that are deprecated (#241-M).
Impact on Implementations
None.
Impact on Applications / Users
With a switch to module mpi_f08, the deprecated routines must be substituted by
non-deprecated routines.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
242-N: Arguments with INTENT=IN, OUT, INOUT
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Use of INTENT=IN, OUT, INOUT in all new Fortran 2008 bindings.
-Details:*
Most problems are already described in MPI-2.2,
MPI-2.2, Chapter 2, Terms and Convention, Section 2.3 Procedure Specification,
especially on page 10, lines 41 - page 11 line 5:
MPI's use of IN, OUT and INOUT is intended to indicate to the user how an argument
is to be used, but does not provide a rigorous classification that can be translated directly
into all language bindings (e.g., INTENT in Fortran 90 bindings or const in C bindings).
For instance, the "constant" MPI_BOTTOM can usually be passed to OUT buffer arguments.
Similarly, MPI_STATUS_IGNORE can be passed as the OUT status argument.
A common occurrence for MPI functions is an argument that is used as IN by some processes
and OUT by other processes. Such an argument is, syntactically, an INOUT argument
and is marked as such, although, semantically, it is not used in one call both for input and
for output on a single process.
Another frequent situation arises when an argument value is needed only by a subset
of the processes. When an argument is not significant at a process then an arbitrary value
can be passed as an argument.
Tickets #247-S and #248-T therefore show the appropriate decisions for each MPI routine.
Tickets #247-S and #248-T must be checked carefully.
Extended Scope
None.
History
Since Fortran 90/95, the attributes INTENT(IN), INTENT(OUT),
or INTENT(INOUT), can be specified for dummy arguments.
Proposed Solution
The solution is implemented only in the Fortran routine definitions.
Text about this tickets are shown in other tickets, see
Tickets #230-B and #249-U.
The Fortran attribute INTENT(IN) is used for all arguments that are
IN arguments in the language-independent notation.
For OUT or INOUT arguments in the language-independent notation,
the Fortran attributes INTENT(OUT) or INTENT(INOUT) are used,
with following exceptions:
If there exists a constant that can be provided as an actual argument,
then an INTENT attribute is not specified. [[BR]]
Examples:
MPI_BOTTOM and MPI_IN_PLACE for buffer arguments;
MPI_STATUS(ES)_IGNORE for all OUT-status arguments;
If Ticket #244-P declares OUT-status arguments as optional
(through function overloading) then they will have INTENT(OUT).
MPI_ERRCODES_IGNORE in array_of_errcodes in MPI_Comm_spawn(_multiple);
If Ticket #244-P also declares the array_of_errcodes in
MPI_Comm_spawn(_multiple) as optional (through function overloading),
then the array_of_errcodes arguments will have INTENT(OUT).
MPI_UNWEIGHTED in sourceweights and destweights in MPI_Dist_graph_neighbors.
(The constants MPI_UNWEIGHTED in MPI_Dist_graph_create(_adjacent),
MPI_ARGV_NULL, and MPI_ARGVS_NULL do not cause a problem,
because they are used in INTENT(IN) arguments.)
If the argument is a handle argument that is implemented in
C with call-by-value, then INTENT(IN) is specified.[[BR]]
Example:
All file-handles in MPI_Write routines;
the request in MPI_Grequest_complete.
New text:
'''Append a new paragraph in MPI-2.2, Section 16.2 "Fortran Support",
Subsection 16.2.2 "Problems with Fortran bindings for MPI",
at the end of Subsubsection "Special Constants" on page 484, line 33:'''
With USE mpi_f08, the attributes INTENT(IN),
INTENT(OUT), and INTENT(INOUT) are used in the Fortran
interface. In most cases INTENT(IN) is used if the C interface
uses call-by-value. For all buffer arguments and for OUT dummy arguments
that allow one of these special constants as input, an INTENT(...)
is not specified.
Text related to this ticket but shown in Ticket #230-B:
-_In new Section 16.2.5 Fortran Support through Module mpif08:*
Each argument is added an INTENT=IN, OUT, or INOUT if appropriate (#242-N).
Impact on Implementations
None.
Impact on Applications / Users
None.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
243-O: Status as MPI_Status Fortran derived type
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: 9 yes, 0 no, 5 abstain.
Description
-Major decisions in this ticket:*
To substitute the status(MPI_STATUS_SIZE) array by a MPI_Status Fortran derived type
-Details:*
The existing status(MPI_STATUS_SIZE) array already fulfils the new requirements:
Incorrect actual arguments can be detected through compile-time argument checking.
Minimal changes to the applications.
Easy to use.
But the existing status(MPI_STATUS_SIZE) array programming interface is awkward.
Therefore, it is substituted by a TYPE(MPI_Status) derived type.
Extended Scope
None.
History
Since Fortran 90/95, Fortran's derived types are the way to express structures similar to a C struct.
The C interface MPI_Status is defined with a C struct.
In Fortran,
status is an array of INTEGERs of size MPI_STATUS_SIZE. The constants
MPI_SOURCE, MPI_TAG and MPI_ERROR are the indices of the entries that store the source,
tag and error fields. Thus, status(MPI_SOURCE), status(MPI_TAG) and
status(MPI_ERROR) contain, respectively, the source, tag and error code of the received
message.
-but should read*
In Fortran with USE mpi or INCLUDE 'mpif.h',
status is an array of INTEGERs of size MPI_STATUS_SIZE. The constants
MPI_SOURCE, MPI_TAG and MPI_ERROR are the indices of the entries that store the source,
tag and error fields. Thus, status(MPI_SOURCE), status(MPI_TAG) and
status(MPI_ERROR) contain, respectively, the source, tag and error code of the received
message.
With Fortran USE mpi_f08, status is defined as the
Fortran derived type TYPE(MPI_Status), which contains three fields named MPI_SOURCE,
MPI_TAG, and MPI_ERROR; the derived type may contain additional fields.
Thus, status%MPI_SOURCE, status%MPI_TAG and status%MPI_ERROR contain the source,
tag, and error code, respectively, of the received message.
Additionally, within both the mpi and the mpi_f08 modules,
the constants MPI_STATUS_SIZE, MPI_SOURCE, MPI_TAG, and MPI_ERROR,
as well as the TYPE(MPI_Status), are defined, so that both modules allow the conversion
between both status representations.
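A minimal usage sketch of the proposed TYPE(MPI_Status) (illustrative only; handle types per Ticket #231-C, ierror omitted per Ticket #239-K):
USE mpi_f08
TYPE(MPI_Status) :: status
REAL :: buf(10)
CALL MPI_Recv(buf, 10, MPI_REAL, MPI_ANY_SOURCE, MPI_ANY_TAG, MPI_COMM_WORLD, status)
PRINT *, 'source =', status%MPI_SOURCE, '  tag =', status%MPI_TAG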
__*Rationale.*
It is not allowed to have the same name (e.g., MPI_SOURCE)
defined as a constant (e.g., Fortran parameter) and as a field
of a derived type.
-(End of rationale.)*__
__*Advice to implementors.*
The Fortran TYPE(MPI_Status) may be defined as a sequence derived type
to achieve the same data layout as in C.
-(End of advice to implementors.)*__
The following two procedures are provided in C to convert from a
Fortran status
(which is an array of integers) to a C status (which is a structure), and vice versa.
-but should read*
The following two procedures are provided in C to convert from a
Fortran (with the mpi module or mpif.h)__ status
(which is an array of integers) to a C status (which is a structure), and vice versa.
-At the end of MPI-2.2, Section 16.3.5 Status, page 502, lines 2-38,* [[BR]]
the following paragraph should be added: [[BR]]
(for better readability of this ticket, the following new text is not underlined although it should):
Using the mpi_f08 Fortran module, a status is declared as TYPE(MPI_Status).
The C datatype MPI_F_Status can be used to hand over a Fortran TYPE(MPI_Status) argument
into a C routine.
int MPI_Status_f082c(MPI_F_Status *f08_status, MPI_Status *c_status)
This C routine converts a Fortran mpi_f08 status into a C status.
int MPI_Status_c2f08(MPI_Status *c_status, MPI_F_Status *f08_status)
This C routine converts a C status into a Fortran mpi_f08 status.
-MPI-2.2, Appendix A.1.2 Types, page 524, after lines 2-44*
The following are defined C type definitions, included in the file mpi.h.
/* C opaque types */
MPI_Aint
MPI_Fint
MPI_Offset
MPI_Status
/* C handles to assorted structures */
MPI_Comm
MPI_Datatype
MPI_Errhandler
MPI_File
MPI_Group
MPI_Info
MPI_Op
MPI_Request
MPI_Win
// C++ opaque types (all within the MPI namespace)
...
...
// C++ handles to assorted structures (classes,
// all within the MPI namespace)
...
...
MPI::Win
-the following paragraph should be added:*
The following are defined Fortran type definitions, included in the mpi_f08 module.
! Fortran opaque types in the mpi_f08 module
TYPE(MPI_Status)
! Fortran handles in the mpi_f08 module
TYPE(MPI_Comm)
TYPE(MPI_Datatype)
TYPE(MPI_Errhandler)
TYPE(MPI_File)
TYPE(MPI_Group)
TYPE(MPI_Info)
TYPE(MPI_Op)
TYPE(MPI_Request)
TYPE(MPI_Win)
Impact on Implementations
None.
Impact on Applications / Users
None.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
244-P: MPI_STATUS(ES)_IGNORE and MPI_ERRCODES_IGNORE through function overloading
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: 5 yes, 2 no, 5 abstain.
[[BR]]Comment: Not a big win.
Description
-Major decisions in this ticket:*
(Part 1) To substitute MPI_STATUS(ES)_IGNORE within USE mpi_f08
by having the status and array_of_statuses OUT arguments as optional through
function overloading.
(Part 2) To substitute MPI_ARGV(S)_NULL and MPI_ERRCODES_IGNORE within USE mpi_f08
by having the argv, array_of_argv IN
and array_of_errcodes OUT arguments as optional through
function overloading in MPI_COMM_SPAWN and MPI_COMM_SPAWN_MULTIPLE.
(Part 3) To substitute MPI_UNWEIGHTED within USE mpi_f08
by having the sourceweights, destweights,
and weights OUT arguments as optional through
function overloading in MPI_DIST_GRAPH_CREATE_ADJACENT, MPI_DIST_GRAPH_CREATE, MPI_DIST_GRAPH_NEIGHBORS.
-Details:*
Using function overloading for status and "OPTIONAL" for ierror
allows the user to call such
routines without using keyword arguments, i.e., all four call variants
(with or without status, with or without ierror) are available.
It is natural to implement optional arguments with the methods
available in modern languages instead of using work-arounds
that are not part of the language.
The existing special address constants
MPI_STATUS_IGNORE, MPI_STATUSES_IGNORE,
MPI_ARGV_NULL, MPI_ARGVS_NULL,
MPI_ERRCODES_IGNORE, and MPI_UNWEIGHTED
are not part of the Fortran language.
They must be viewed as a work-around outside of the language.
While "OPTIONAL" requires a branch at runtime, with function overloading,
the branch can be implemented at compile time.
On the other hand, function overloading doubles the number of routines.
Because ierror is an argument in all but two (Wtime+Wtick) routines,
status and array_of_statuses show up as OUT argument
in only 33 routines, and array_of_errcodes only in two routines.
MPI_STATUS_IGNORE, MPI_STATUSES_IGNORE,
MPI_ARGV_NULL, MPI_ARGVS_NULL,
MPI_ERRCODES_IGNORE, and MPI_UNWEIGHTED
are the only six special constants of this kind.
Therefore, it makes sense to implement the function overloading for
all three parts or for none.
Extended Scope
None.
History
All MPI_..._IGNORE special constants were introduced in MPI-2.0,
i.e., applications written in pure MPI-1.1 are not affected.
Proposed Solution - Part 1
-MPI-2.2, Section 2.3 Procedure Specification, page 10, line 45 reads*
Similarly, MPI_STATUS_IGNORE can be passed as the OUT status
argument.
-but should read*
Similarly, MPI_STATUS_IGNORE can be passed as the OUT status
argument (with mpi.h, the mpi module or mpif.h).
The same approach is followed for other array arguments. In some cases NULL handles are
considered valid entries. When a NULL argument is desired for an array of statuses, one
uses MPI_STATUSES_IGNORE.
-but should read*
The same approach is followed for other array arguments. In some cases NULL handles are
considered valid entries. When a NULL argument is desired for an array of statuses, one
uses MPI_STATUSES_IGNORE.
With the mpi_f08 module, optional arguments through function overloading
are used instead of [[BR]]
MPI_STATUS_IGNORE, MPI_STATUSES_IGNORE,(if #244-P Part 1 is accepted) [[BR]]
MPI_ARGV_NULL, MPI_ARGVS_NULL, MPI_ERRCODES_IGNORE,(#244-P Part 2) [[BR]]
and MPI_UNWEIGHTED.(#244-P Part 3) [[BR]]
-(Without #244-P Part 2 and/or Part 3:)* [[BR]]
The constants MPI_ARGV_NULL, MPI_ARGVS_NULL, MPI_ERRCODES_IGNORE, (without Part 2) [[BR]]
and MPI_UNWEIGHTED(without Part 3) [[BR]]
are not substituted by function overloading.
To cope with this problem, there are two predefined constants, MPI_STATUS_IGNORE
and MPI_STATUSES_IGNORE, which when passed to a receive, wait, or test function, inform
the implementation that the status fields are not to be filled in. Note that
-but should read*
To cope with this problem, there are two predefined constants, MPI_STATUS_IGNORE
and MPI_STATUSES_IGNORE with
the C language bindings and the Fortran bindings through the
mpi module and the mpif.h include file,
which when passed to a receive, wait, or test function, inform
the implementation that the status fields are not to be filled in. Note that
There are no C++ bindings for MPI_STATUS_IGNORE or MPI_STATUSES_IGNORE.
To allow an OUT or INOUT MPI::Status
argument to be ignored, all MPI C++ bindings that have
OUT or INOUT MPI::Status
parameters are overloaded with a second version that omits the
OUT or INOUT MPI::Status parameter.
Example 3.1 The C++ bindings for MPI_PROBE are:
void MPI::Comm::Probe(int source, int tag, MPI::Status& status) const[[BR]]
void MPI::Comm::Probe(int source, int tag) const
-but should read*
There are no C++ bindings for MPI_STATUS_IGNORE or MPI_STATUSES_IGNORE.
With the Fortran bindings through the mpi_f08 module and the C++ bindings,
MPI_STATUS_IGNORE and MPI_STATUSES_IGNORE do not exist.__
To allow an OUT or INOUT TYPE(MPI_Status) or MPI::Status
argument to be ignored, all MPI mpi_f08 and C++ bindings that have
OUT or INOUT TYPE(MPI_Status) or MPI::Status
parameters are overloaded with a second version that omits the
OUT or INOUT TYPE(MPI_Status) or MPI::Status parameter.
Example 3.1 The mpi_f08 bindings for MPI_PROBE are: [[BR]]
SUBROUTINE MPI_Probe(source, tag, comm, status, ierror) [[BR]]
INTEGER, INTENT(IN) :: source, tag [[BR]]
TYPE(MPI_Comm), INTENT(IN) :: comm [[BR]]
TYPE(MPI_Status), INTENT(OUT) :: status [[BR]]
INTEGER, OPTIONAL, INTENT(OUT) :: ierror [[BR]]
END SUBROUTINE [[BR]]
SUBROUTINE MPI_Probe(source, tag, comm, ierror) [[BR]]
INTEGER, INTENT(IN) :: source, tag [[BR]]
TYPE(MPI_Comm), INTENT(IN) :: comm [[BR]]
INTEGER, OPTIONAL, INTENT(OUT) :: ierror [[BR]]
END SUBROUTINE [[BR]]
Example 3.12 The C++ bindings for MPI_PROBE are:
void MPI::Comm::Probe(int source, int tag, MPI::Status& status) const[[BR]]
void MPI::Comm::Probe(int source, int tag) const
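A hedged sketch of how an implementation could realize this overloading in Fortran (the specific procedure names are illustrative and not part of the proposal):
INTERFACE MPI_Probe                       ! generic name visible to the user
  MODULE PROCEDURE MPI_Probe_with_status  ! specific with the status argument
  MODULE PROCEDURE MPI_Probe_no_status    ! specific without the status argument
END INTERFACE MPI_Probe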
In C or Fortran, an
application may pass MPI_ERRCODES_IGNORE if it is not interested in the error codes.
In C++ this constant does not exist,
and the array_of_errcodes argument may be omitted from the argument list.
-Advice to implementors.*
MPI_ERRCODES_IGNORE in Fortran
is a special type of constant, like MPI_BOTTOM.
See the discussion in Section 2.5.4 on page 14.
-(End of advice to implementors.)*
-but should read*
In C or in the Fortran mpi module or mpif.h include file, an
application may pass MPI_ERRCODES_IGNORE if it is not interested in the error codes.
In the Fortran mpi_f08 module or in C++ this constant does not exist,
and the array_of_errcodes argument may be omitted from the argument list.
-Advice to implementors.*
__In the Fortran `mpi` module or `mpif.h` include file,__ MPI_ERRCODES_IGNORE ~~in Fortran~~
is a special type of constant, like MPI_BOTTOM.
See the discussion in Section 2.5.4 on page 14.
__In the Fortran `mpi_f08` module, the optional argument
has to be implemented through function overloading.
See the discussion in Section 2.5.2 on page 14.
-(End of advice to implementors.)*
In both cases, the callback is passed a reference to the corresponding
status variable passed by the user to the MPI call; the status set by the callback function
is returned by the MPI call. If the user provided MPI_STATUS_IGNORE or
MPI_STATUSES_IGNORE to the MPI function that causes query_fn to be
called,
then MPI will pass a valid status object to query_fn,
and this status will be ignored upon return of the callback function.
-but should read*
In both cases, the callback is passed a reference to the corresponding
status variable passed by the user to the MPI call; the status set by the callback function
is returned by the MPI call. If the user provided MPI_STATUS_IGNORE or
MPI_STATUSES_IGNORE to the MPI function that causes query_fn to be
called__ or has omitted the status argument (with the mpi_f08 Fortran module or C++)__,
then MPI will pass a valid status object to query_fn,
and this status will be ignored upon return of the callback function.
However, if the MPI function was passed MPI_STATUSES_IGNORE,
then the individual error codes returned by each callback functions will be lost.
-but should read*
However, if the MPI function was passed MPI_STATUSES_IGNORE or the status argument was omitted,
then the individual error codes returned by each callback functions will be lost.
'''MPI-2.2, Section 13.4.1 Data Access Routines,
Subsection Data Access Conventions, page 406, lines 44-46 read'''
The user can pass (in C and Fortran)
MPI_STATUS_IGNORE in the status argument if the return value of this argument is not needed.
In C++,
the status argument is optional.
-but should read*
The user can pass (in C and with the Fortran mpi module or mpif.h include file)
MPI_STATUS_IGNORE in the status argument if the return value of this argument is not needed.
With the Fortran mpi_f08 module or in C++,
the status argument is optional.
-MPI-2.2, Section 16.2.2 Problems With Fortran Bindings for MPI, page 481, lines 26-30 read*
Several named “constants,” such as MPI_BOTTOM, MPI_IN_PLACE,
MPI_STATUS_IGNORE, MPI_STATUSES_IGNORE, MPI_ERRCODES_IGNORE,
MPI_UNWEIGHTED, MPI_ARGV_NULL, and MPI_ARGVS_NULL are not ordinary Fortran
constants and require a special implementation. See Section 2.5.4 on page 14 for more
information.
Moreover, “constants” such
as MPI_BOTTOM and MPI_STATUS_IGNORE are not constants as defined by Fortran,
but “special addresses” used in a nonstandard way.
-and need no modifications within this ticket.*
-MPI-2.2, Section 16.3.5 Status, page 502, needs no modifications within this ticket.*
Also constant "addresses," i.e., special values for reference arguments that are not handles,
such as MPI_BOTTOM or MPI_STATUS_IGNORE may have different values in different
languages.
-and need no modifications within this ticket.*
'''MPI-2.2, Appendix A.1.1 Defined Constants,
Table "Constants Specifying Empty or Ignored Input",
page 523, lines 22-36, left column reads'''
C/Fortran name          C type / Fortran type
MPI_ARGVS_NULL          char*** / 2-dim. array of CHARACTER*(*)
MPI_ARGV_NULL           char** / array of CHARACTER*(*)
MPI_ERRCODES_IGNORE     int* / INTEGER array
MPI_STATUSES_IGNORE     MPI_Status* / INTEGER, DIMENSION(MPI_STATUS_SIZE,*)
MPI_STATUS_IGNORE       MPI_Status* / INTEGER, DIMENSION(MPI_STATUS_SIZE)
MPI_UNWEIGHTED
-but should read*
C/Fortran name          C type / Fortran type with mpi module / Fortran type with mpi_f08 module
MPI_ARGVS_NULL          char*** / 2-dim. array of CHARACTER*(*) / not defined (with #244-P Part 2)
                        char*** / 2-dim. array of CHARACTER*(*) / as with mpi (without P. 2)
MPI_ARGV_NULL           char** / array of CHARACTER*(*) / not defined (with #244-P Part 2)
                        char** / array of CHARACTER*(*) / as with mpi (without P. 2)
MPI_ERRCODES_IGNORE     int* / INTEGER array / not defined (with #244-P Part 2)
                        int* / INTEGER array / as with mpi (without P. 2)
MPI_STATUSES_IGNORE     MPI_Status* / INTEGER, DIMENSION(MPI_STATUS_SIZE,*) / not defined (with #244-P Part 1)
                        MPI_Status* / INTEGER, DIMENSION(MPI_STATUS_SIZE,*) / as with mpi (without P. 1)
MPI_STATUS_IGNORE       MPI_Status* / INTEGER, DIMENSION(MPI_STATUS_SIZE) / not defined (with #244-P Part 1)
                        MPI_Status* / INTEGER, DIMENSION(MPI_STATUS_SIZE) / as with mpi (without P. 1)
MPI_UNWEIGHTED          int* / INTEGER, DIMENSION(*) / not defined (with #244-P Part 3)
                        int* / INTEGER, DIMENSION(*) / as with mpi (without P. 3)
MPI-2.2, Section A.4.18 Inter-language Operability, page 591, line 43 - page 592, line 6 reads
Since there are no C++ MPI::STATUS_IGNORE and MPI::STATUSES_IGNORE objects, the
result of promoting the C or Fortran handles (MPI_STATUS_IGNORE and
MPI_STATUSES_IGNORE) to C++ is undefined.
(With the alternative Solution: All existing usage of MPI_STATUS_IGNORE
must be substituted by using the optional call syntax.)
Alternative Solutions
-Major decision in this alternative solution:*
Implement MPI_STATUS(ES)_IGNORE through function overloading in the new Fortran 2008 binding,
Not to provide the special constants MPI_STATUS(ES)_IGNORE in mpi_f08.
-Details:*
In new Section 16.2.5 "Fortran Support through Module mpi_f08" added by Ticket #230-B,
one must add before the list item about IERROR:
All status and array_of_statuses output
arguments are declared as optional(only with #244-P Alternative Solution).
Same for Ticket #247-S.
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
245-Q: MPI_ALLOC_MEM and Fortran
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
None.
Description
-Major decisions in this ticket:*
How to use MPI_ALLOC_MEM together with C-Pointers in Fortran.[[BR]]
How to use MPI_ALLOC_MEM together with allocatable arrays?
[[BR]]TODO!!!
-Details:*
-To be done.*
Extended Scope
None.
History
Proposed Solution
MPI-2.2, Section 8.2 Memory Allocation
-TODO*: Declaration of MPI_ALLOC_MEM, MPI_FREE_MEM
-TODO*: MPI-2.2, Section 8.2 Memory Allocation, Example 8.1 on page 275, lines 42 - 2 on next page:
`REAL A `
[[BR]]`POINTER (P, A(100,100)) ! no memory is allocated`
[[BR]]`CALL MPI_ALLOC_MEM(4*100*100, MPI_INFO_NULL, P, IERR)`
[[BR]]`! memory is allocated`
[[BR]]`...`
[[BR]]`A(3,5) = 2.71;`
[[BR]]`...`
[[BR]]`CALL MPI_FREE_MEM(A, IERR) ! memory is freed`
and text MPI-2.2, Section 8.2 Memory Allocation, page 276, lines 4-6,
Since standard Fortran does not support (C-like) pointers, this code is not Fortran 77
or Fortran 90 code. Some compilers (in particular, at the time of writing, g77 and Fortran
compilers for Intel) do not support this code.
and MPI-2.2, Section 11.4.3 Lock, page 358, lines 23-28:
The downside of this decision is that passive target communication cannot be used
without taking advantage of nonstandard Fortran features: namely, the availability
of C-like pointers; these are not supported by some Fortran compilers (g77 and Windows/
NT compilers, at the time of writing). Also, passive target communication
cannot be portably targeted to COMMON blocks, or other statically declared Fortran
arrays. (End of rationale.)
A new version with Fortran C-binding pointer must be added.
As far as I know, this does not change the interface,
i.e., the new example should be valid for all three,
include file mpif.h and modules mpi and mpi_f08.
-To be done.*
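A hedged sketch of what such a new example could look like, assuming the baseptr argument of MPI_ALLOC_MEM may be declared TYPE(C_PTR) from ISO_C_BINDING (exactly the open TODO above); sizes and values mirror the existing Cray-pointer example:
USE mpi_f08
USE, INTRINSIC :: ISO_C_BINDING
TYPE(C_PTR) :: p
REAL, POINTER :: a(:,:)                        ! no memory is allocated yet
INTEGER :: ierr
CALL MPI_ALLOC_MEM(INT(4*100*100, MPI_ADDRESS_KIND), MPI_INFO_NULL, p, ierr)
CALL C_F_POINTER(p, a, (/100, 100/))           ! memory is allocated; associate the Fortran pointer
a(3,5) = 2.71
CALL MPI_FREE_MEM(a, ierr)                     ! memory is freed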
Impact on Implementations
None.
Impact on Applications / Users
Users of MPI_ALLOC_MEM may (but need not) switch from Cray-pointers to C-Pointers.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
246-R: Upper and lower case letters in new Fortran bindings
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Description
-Major decisions in this ticket:*
The new mpi_f08 interface description uses:
Upper case for Fortran keywords and MPI constants.
MPI_Xxxxx for MPI routines and MPI Fortran (derived) types, e.g., MPI_Comm.
Lower case for all dummy argument names.
-Example:*
SUBROUTINE MPI_Recv(buf, count, datatype, source, tag, comm, status, ierror)
TYPE(*), DIMENSION(..) :: buf
TYPE(MPI_Datatype), INTENT(IN) :: datatype
TYPE(MPI_Comm), INTENT(IN) :: comm
INTEGER, INTENT(IN) :: count, source, tag
TYPE(MPI_Status), INTENT(OUT) :: status ! optional by overloading
INTEGER, OPTIONAL, INTENT(OUT) :: ierror
END
Extended Scope
None.
History
MPI-2.2 does not explain the usage of lower- and upper-case names,
neither for C nor for Fortran.
Therefore, existing wording need not be changed.
Proposed Solution
Use the rules in the description for the language bindings
shown in Ticket #247-S.
Impact on Implementations
None, because Fortran is case insensitive.
Impact on Applications / Users
None, because Fortran is case insensitive.
Additionally, all constant handles,
including the C datatype handles used for Fortran types, are all
in upper case; therefore no changes are needed.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
247-S: All new Fortran 2008 bindings - Part 1
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Description
-Major decisions in this ticket:*
Detailed decision about all new Fortran 2008 interfaces in mpi_f08.
-Details:*
This ticket provides the rule for converting existing Fortran interfaces
into new Fortran 2008 interfaces.
__When using the mpi_f08 module, the declaration is:
SUBROUTINE user_function(invec, inoutvec, len, type)
TYPE(*) :: invec(len), inoutvec(len)
INTEGER :: len
TYPE(MPI_Datatype) :: type
__
-CAUTION:* If Ticket #234-F does not pass, then the new TYPE(*) line above must be
substituted by
[[BR]]<type> invec(len), inoutvec(len)
-CAUTION:* If Ticket #231-C does not pass, then the new INTEGER and the new TYPE(MPI_Datatype)
line above must be substituted by
[[BR]]INTEGER :: len, type
The Fortran version of MPI_REDUCE will invoke a user-defined reduce function using
the Fortran calling conventions and will pass a Fortran-type datatype argument; the
C version will use C calling convention and the C representation of a datatype handle.
Users who plan to mix languages should define their reduction functions accordingly.
[[BR]](End of advice to users.)
-but should read*
The Fortran version of MPI_REDUCE will invoke a user-defined reduce function using
the Fortran calling conventions and will pass a Fortran-type datatype argument; the
C version will use C calling convention and the C representation of a datatype handle.
If a Fortran user-defined reduce function is used, then the calling sequence
further depends on whether MPI_OP_CREATE was invoked via the mpif.h or USE mpi interface,
or the USE mpi_f08 interface.
Users who plan to mix languages should define their reduction functions accordingly.
[[BR]](End of advice to users.)
The Fortran MPI-2 language bindings have been designed to be compatible with the Fortran
90 standard (and later). These bindings are in most cases compatible with Fortran 77,
implicit-style interfaces.
-but should read*
The Fortran MPI-2 language bindings have been designed to be compatible with the Fortran
90 standard (and later). These bindings are in most cases compatible with Fortran 77,
implicit-style interfaces.
MPI defines two levels of Fortran support,
described in Sections 16.2.3 and 16.2.4. In
the rest of this section, "Fortran" and "Fortran 90" shall refer to "Fortran 90" and its
successors, unless qualified.
Basic Fortran Support An implementation with this level of Fortran support provides
the original Fortran bindings specified in MPI-1, with small additional requirements
specified in Section 16.2.3.
Extended Fortran Support An implementation with this level of Fortran support
provides Basic Fortran Support plus additional features that specifically support
Fortran 90, as described in Section 16.2.4.
A compliant MPI-2 implementation providing a Fortran interface must provide Extended
Fortran Support unless the target compiler does not support modules or KIND-
parameterized types.
-but should read*
MPI defines ~~two~~three levels of Fortran support,
described in Sections 16.2.3,~~ and~~ 16.2.4, and 16.2.6. In
the rest of this section, "Fortran" and "Fortran 90" shall refer to "Fortran 90" and its
successors, unless qualified.
Basic Fortran Support An implementation with this level of Fortran support provides
the original Fortran bindings specified in MPI-1, with small additional requirements
specified in Section 16.2.3.
Extended Fortran Support An implementation with this level of Fortran support
provides Basic Fortran Support plus additional features that specifically support
Fortran 90, as described in Section 16.2.4.
__3. Advanced Fortran Support An implementation with this level of Fortran support
provides Extended Fortran Support plus additional features that partially require
Fortran 2008, as described in Section 16.2.6.
A compliant MPI-2 implementation providing a Fortran interface must provide Extended
Fortran Support unless the target compiler does not support modules or KIND-
parameterized types.
*A compliant MPI-3 implementation providing a Fortran interface must provide Advanced
Fortran Support unless the target compiler does not support explicit interfaces with
`TYPE(*), DIMENSION(..)`.**
-After MPI-2.2, Section 16.2.5, page 497, line 19, the following section is added:*
[[BR]]The ticket numbers in parentheses (#xxx-X) indicate sentences that are removed if the appropriate
ticket is not voted in.
16.2.6 Advanced Fortran Support
The include file mpif.h is deprecated (#233-E).
The module mpi guarantees compile-time argument checking
except for all choice arguments, i.e., the buffers (#232-D).
A new module mpi_f08 is introduced.
This module guarantees compile-time argument checking.
All handles are defined with named types
(instead of INTEGER handles in module mpi) (#231-C).
The buffers are declared with the new Fortran 2008 feature
assumed type and assumed rank "TYPE(*), DIMENSION(..)"
and with this, non-contiguous sub-arrays are now valid also
in nonblocking routines (#234-F).
With this module new Fortran 2008 definitions are added for each MPI routine (#247-S),
except for routines that are deprecated (#241-M).
Each argument is given an INTENT of IN, OUT, or INOUT where appropriate (#242-N).
All status and array_of_statuses output
arguments are declared as optional (only with the #244-P Alternative Solution).
All ierror output arguments are declared as optional,
except for user-defined callback functions (e.g., comm_copy_attr_fn) and their
predefined callbacks (e.g., MPI_NULL_COPY_FN) (#239-K);
see the sketch after this list.
If the target compiler does not support explicit interfaces with
assumed type and assumed rank, then the use of non-contiguous sub-arrays
in nonblocking calls may be restricted as with module mpi (#234-F).
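A hedged sketch of what these points permit in application code (buffer size, peer, and tag are illustrative): the strided sub-array is legal in the nonblocking call because of the assumed-type, assumed-rank buffer declaration, and ierror is simply omitted:
USE mpi_f08
REAL :: s(100)
TYPE(MPI_Request) :: rq
TYPE(MPI_Status)  :: st
! The 20 strided elements s(1), s(6), ..., s(96) are passed directly:
CALL MPI_Isend(s(1:100:5), 20, MPI_REAL, 1, 0, MPI_COMM_WORLD, rq)   ! ierror omitted
CALL MPI_Wait(rq, st)                                                ! ierror omitted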
-Advice to implementors. (#232-D) *
In module mpi, with most compilers the choice argument can be implemented with the
following explicit interface:
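The interface itself is not spelled out at this point; based on the compiler-directive approach described in Ticket #232-D, such an explicit interface in the mpi module might look like the following hedged sketch (the two directive lines are alternatives for different compilers, and their spelling is compiler specific):
INTERFACE
  SUBROUTINE MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
    !DEC$ ATTRIBUTES NO_ARG_CHECK :: BUF
    !$PRAGMA IGNORE_TKR BUF
    REAL, DIMENSION(*) :: BUF
    INTEGER :: COUNT, DATATYPE, DEST, TAG, COMM, IERROR
  END SUBROUTINE MPI_SEND
END INTERFACE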
It is explicitly allowed that the choice arguments are implemented
in the same way as with module mpi_f08.
-(End of advice to implementors.)*
-Rationale.
For user-defined callback functions (e.g., comm_copy_attr_fn) and their
predefined callbacks (e.g., MPI_NULL_COPY_FN), the ierror argument is not optional,
i.e., these user-defined functions need not check whether the MPI
library calls these routines with or without an actual ierror output argument.
-(End of rationale.) (#239-K)
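For illustration, a hedged sketch of such a user-defined callback (the name my_copy_attr_fn is made up); its ierror argument is an ordinary, non-optional dummy argument:
SUBROUTINE my_copy_attr_fn(oldcomm, comm_keyval, extra_state, &
                           attribute_val_in, attribute_val_out, flag, ierror)
  USE mpi_f08
  TYPE(MPI_Comm) :: oldcomm
  INTEGER :: comm_keyval, ierror
  INTEGER(KIND=MPI_ADDRESS_KIND) :: extra_state, attribute_val_in, attribute_val_out
  LOGICAL :: flag
  flag = .TRUE.                         ! the attribute shall be copied
  attribute_val_out = attribute_val_in
  ierror = MPI_SUCCESS                  ! ierror is always present here
END SUBROUTINE my_copy_attr_fn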
Impact on Implementations
See Ticket #230-B.
Impact on Applications / Users
See Ticket #230-B.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
250-V: Minor Corrections in Fortran Interfaces
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: yes by acclamation.
Description
-Major decisions in this ticket:*
Typo in the existing Fortran Interface of MPI_INTERCOMM_MERGE:[[BR]]
INTRACOMM --> NEWINTRACOMM
Remove double definition of request in
the Fortran binding type declaration part of MPI_SEND_INIT
and MPI_BSEND_INIT
Substitute Fortran callback prototype names ending in "_FN"
by the same names ending in "_FUNCTION".
-Details:*
Typo in the existing Fortran Interface of MPI_INTERCOMM_MERGE:[[BR]]
INTRACOMM --> NEWINTRACOMM[[BR]]
-Reason:* With the new MPI-3.0 explicit Fortran interfaces, applications can
freely choose between positional argument lists and keyword-based
argument lists. Because of this, the names of the dummy
arguments become relevant for the first time. Therefore, in all language bindings,
the dummy argument names should be identical
to the language-independent dummy argument names.
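A hedged sketch of why the dummy argument name matters once keyword arguments can be used (the handle variables are illustrative):
USE mpi_f08
TYPE(MPI_Comm) :: intercomm, newcomm
! With the corrected dummy name, a keyword-based call reads:
CALL MPI_Intercomm_merge(intercomm, .TRUE., newintracomm=newcomm)   ! ierror omitted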
All callback prototypes end with _FUNCTION and all dummy arguments
and predefined values for such callback functions end with _FN.
These "_FN" prototype names are the only such set of errors in the Fortran interfaces in the document.
Changing these names is only an editorial change, not a modification of the
MPI interface, because these Fortran names are not part of mpif.h.
To be modified:
COMM_COPY_ATTR_FN --> COMM_COPY_ATTR_FUNCTION
COMM_DELETE_ATTR_FN --> COMM_DELETE_ATTR_FUNCTION
WIN_COPY_ATTR_FN --> WIN_COPY_ATTR_FUNCTION
WIN_DELETE_ATTR_FN --> WIN_DELETE_ATTR_FUNCTION
TYPE_COPY_ATTR_FN --> TYPE_COPY_ATTR_FUNCTION
TYPE_DELETE_ATTR_FN --> TYPE_DELETE_ATTR_FUNCTION
Extended Scope
None.
History
Proposed Solution
-_MPI-2.2, Section 3.9 Persistent Communication Requests, in the Fortran declaration of MPI_SEND_INIT, page 70, line 3 reads_*
'''MPI-2.2, Section 6.7.2 Communicators, page 226, line 44 [[BR]]
and Appendix A.1.1 Constants, page 520, lines 14, 17, one should modify (3 times):'''
COMM_COPY_ATTR_FN --> COMM_COPY_ATTR_FUNCTION
'''MPI-2.2, Section 6.7.2 Communicators, page 227, line 5 [[BR]]
and Appendix A.1.1 Constants, page 520, line 20, one should modify (2 times):'''
COMM_DELETE_ATTR_FN --> COMM_DELETE_ATTR_FUNCTION
'''MPI-2.2, Section 6.7.3 Windows, page 231, line 40 [[BR]]
and Appendix A.1.1 Constants, page 520, lines 23, 26, one should modify (3 times):'''
WIN_COPY_ATTR_FN --> WIN_COPY_ATTR_FUNCTION
'''MPI-2.2, Section 6.7.3 Windows, page 232, line 1 [[BR]]
and Appendix A.1.1 Constants, page 520, line 29, one should modify (2 times):'''
WIN_DELETE_ATTR_FN --> WIN_DELETE_ATTR_FUNCTION
'''MPI-2.2, Section 6.7.4 Datatypes, page 234, line 28 [[BR]]
and Appendix A.1.1 Constants, page 520, lines 32, 35, one should modify (3 times):'''
TYPE_COPY_ATTR_FN --> TYPE_COPY_ATTR_FUNCTION
'''MPI-2.2, Section 6.7.4 Datatypes, page 234, line 36 [[BR]]
and Appendix A.1.1 Constants, page 520, line 38, one should modify (2 times):'''
TYPE_DELETE_ATTR_FN --> TYPE_DELETE_ATTR_FUNCTION
Impact on Implementations
Correction of module mpi and mpif.h.
Impact on Applications / Users
None.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]]
yyy.
252-W: Substituting dummy argument name "type" by "datatype" or "oldtype", and others
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: yes by acclamation.
Description
-Major decisions in this ticket:*
To minimize conflicts with language keywords (TYPE in Fortran),
the dummy argument name "type" is substituted by "datatype" or "oldtype".
Substitute callback dummy argument name "function" by the existing
callback prototype name, e.g., comm_errhandler_function.
-Details:*
The problem with "type" arises with the following MPI library routines:
MPI_Type_dup(type, newtype) [[BR]]
type --> oldtype
MPI_Type_set_attr(type, type_keyval, attribute_val) [[BR]]
type --> datatype
MPI_Type_get_attr(type, type_keyval, attribute_val, flag) [[BR]]
type --> datatype
MPI_Type_delete_attr(type, type_keyval) [[BR]]
type --> datatype
MPI_Type_set_name(type, type_name) [[BR]]
type --> datatype
MPI_Type_get_name(type, type_name, resultlen) [[BR]]
type --> datatype
MPI_Type_match_size(typeclass, size, type) [[BR]]
type --> datatype
and with the following callback prototype:
typedef int MPI_Type_delete_attr_function(MPI_Datatype type, int type_keyval, void *attribute_val, void *extra_state); [[BR]]
type --> datatype
SUBROUTINE TYPE_DELETE_ATTR_FN(TYPE, TYPE_KEYVAL, ATTRIBUTE_VAL, EXTRA_STATE, IERROR) [[BR]]
type --> datatype
and with the predefined callbacks
MPI_TYPE_NULL_DELETE_FN(MPI_Datatype type, int type_keyval, void *attribute_val, void *extra_state) [[BR]]
type --> datatype
MPI_TYPE_NULL_DELETE_FN(TYPE, TYPE_KEYVAL, ATTRIBUTE_VAL, EXTRA_STATE) [[BR]]
type --> datatype
The problem with "fuction" arises with the following MPI library routines:
MPI_Op_create( function, commute, op) [[BR]]
function --> user_fn
MPI_Comm_create_errhandler(function, errhandler) [[BR]]
function --> comm_errhandler_fn
MPI_Win_create_errhandler(function, errhandler) [[BR]]
function --> win_errhandler_fn
MPI_File_create_errhandler(function, errhandler) [[BR]]
function --> file_errhandler_fn
MPI_Errhandler_create(function, errhandler) [[BR]]
function --> handler_fn
The change "_FN" --> "_FUNCTION" in callback prototype names
is necessary to have same names in C and Fortran,
and to have a clear distinguishing between prototype names
(with _FUNCTION) and predefined arguments (always with _FN).
With the new MPI-3.0 explicit Fortran interfaces, applications can
freely choose between positional argument lists and keyword-based
argument lists. Because of this, the names of the dummy
arguments become relevant for the first time. Therefore, the dummy argument names should
not be in conflict with language keywords.
Current Fortran can resolve such conflicts, but it is bad
programming practice to use variable names identical to
Fortran keywords.
In the MPI-2.2 specification, this problem arises with
the Fortran keyword "TYPE".
In addition, in all language bindings,
the dummy argument names should be identical
to the language-independent dummy argument names.
MPI-3.0 will be the last time that dummy argument names
can be changed without any conflicts for existing
application programs.
In the C binding, dummy argument name changes do not matter.
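For illustration, a hedged sketch of keyword-based calls with the renamed dummy arguments (the handle variables and the name string are made up; t is assumed to be a valid datatype handle):
USE mpi_f08
TYPE(MPI_Datatype) :: t, tnew
CALL MPI_Type_dup(oldtype=t, newtype=tnew)                             ! was: type
CALL MPI_Type_set_name(datatype=tnew, type_name='my_particle_type')   ! was: type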
Extended Scope
None.
History
Proposed Solution
For MPI_Type_dup(type, newtype),
[[BR]] on MPI-2.2 Sect. 4.1.10, page 100, lines 36, 37, 41, 42, 43, and page 101, lines 3, 7,
[[BR]] the dummy argument name type must be substituted (7 times) by oldtype.
...
(snip - see original #252-W)
Impact on Implementations
The dummy argument names in the header files mpi.h and mpif.h
and in the Fortran modules mpi and mpi_f08 must be changed.
The C library routines need not be changed.
Originally by RolfRabenseifner on 2010-09-01 11:53:06 -0500
(This ticket currently contains all tickets #A - #X for printing purpose)
229-A: Overview over all related tickets## Description
This ticket gives an overview over all related new MPI-3 Fortran tickets. It is intended that they are independent or that the require-relation is only uni-directional. Therefore they have independent voting.
All these tickets are owned by Rolf Rabenseifner, Craig Rasmussen, and Jeff Squyres together.
Extended Scope
None.
History
Nomenclature:
MPI-1.0 - MPI-1.3 were based on Fortran 77 interfaces. In MPI-2.0 - MPI-2.2, all interfaces are "Fortran" interfaces. In many cases, new Fortran 90 methods (e.g.,
KIND=....
) are used. Buffers are defined as<type> BUF(*)
, which isn't a Fortran notation. The behavior is defined through the usage in implicit interfaces (old Fortran 77 style subroutine definitions). All handles are defined asINTEGER
.MPI libraries are allowed to enable compile-time argument checking of MPI applications, as long as the application behaves as with implicit interfaces (i.e., no compile-time argument checking).
To enable compile-time argument checking, current MPI libraries use special non-standard options for the buffer arguments.
Another problem area is the handling of Fortran optimization together with nonblocking MPI routines.
Proposed Solution
Major goals of the New MPI-3 Fortran support are:
To achieve a high compile-time argument checking quality together with acceptable backward compatibility, the new features require the use of a new
USE mpi_f08
module. Parts of the features are also included in existingUSE mpi
. Old styleinclude 'mpif.h'
is kept and can continues to offer an old style interface with (old Fortran 77) implicit interfaces, but the use of 'mpif.h' is strongly discouraged.Impact on Implementations
The C-based backend of MPI routines with buffers is doubled. [[BR]] Routines without buffer arguments can use the same interface for the existing
INCLUDE 'mpif.h'
andUSE mpi
and the newUSE mpi_f08
.Impact on Applications / Users
None, as long as the application uses already an MPI library with compile-time argument checking. Applications that are not consistent with compile-time argument checking may require some bug corrections. Those application bugs are semantically correct programs, but syntactically wrong according to the definition of MPI. If an application programmer does not resolve those application bugs, he/she is still able to switch to
include 'mpif.h
and to postpone the fixing of his/her application bugs.Alternative Solutions
See inside of ticket descriptions.
Entry for the Change Log
See inside of ticket descriptions.
230-B: New module "USE mpi_f08"See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Description
-Major decisions in this ticket:*
mpi_f08
module.mpi_f08
module for all new features to keep compatibility for existing Fortran interface with existingmpi
module.Further details are handled in further tickets:
Extended Scope
None.
History
Current MPI-2.2 requires that mpif.h contains full MPI-2.2 because in the Extended Fortran Support, the standard requires (MPI-2.2, page 489, line 7):
Applications may use either the
mpi
module or thempif.h
include file.Proposed Solution
''The ticket numbers in parenthesis (#xxx-X) indicate sentences that are removed if the appropriate ticket is not voted in.''
-MPI-2.2, Chapter 1, Introduction, Page 1, lines 23-25 read*
MPI is not a language, and all MPI operations are expressed as functions, subroutines, or methods, according to the appropriate language bindings, which for C, C++, Fortran-77, and Fortran-95, are part of the MPI standard.
-but should read*
MPI is not a language, and all MPI operations are expressed as functions, subroutines, or methods, according to the appropriate language bindings, which for C, C++,
Fortran-77, and Fortran-95,and Fortran, are part of the MPI standard.-MPI-2.2, Chapter 1, Introduction, Page 2, line 1 reads*
Allow convenient C, C++, Fortran-77, and Fortran-95 bindings for the interface.
-but should read*
Allow convenient C, C++,
Fortran-77, and Fortran-95,and Fortran bindings for the interface.-MPI-2.2, Chapter 1, Introduction, Page 4, line 34: Add in the new section "1.6 Background of MPI-3.0":*
A new Fortran
mpi_f08
module is introduced to provide extended compile-time argument checking and buffer handling in nonblocking routines. The existingmpi
module provides compile-time argument checking on the basis of existing MPI-2.2 routine definitions (#232-D)**. The use ofmpif.h
is strongly discouraged (#233-E)**.-MPI-2.2, Chapter 2, Terms and Convention, Page 9, line 18 reads*
Some of the major areas of difference are the naming conventions, some semantic definitions, file objects, Fortran 90 vs Fortran 77, C++, processes, and interaction with signals.
-and should not be modified*
-MPI-2.2, Chapter 2, Terms and Convention, Section 2.3 Procedure Specification, page 11, lines 20-23 read*
All MPI functions are first specified in the language-independent notation. Immediately below this, the ISO C version of the function is shown followed by a version of the same function in Fortran and then the C++ binding. Fortran in this document refers to Fortran 90; see Section 2.6.
-but should read*
All MPI functions are first specified in the language-independent notation. Immediately below this, language dependent bindings follow:
USE mpi
orINCLUDE 'mpif.h'
.USE mpi_f08
.-MPI-2.2, Chapter 2, Terms and Conventions, Section 2.6.2 Fortran Binding Issues, page 18, line 6-9 reads*
The MPI Fortran binding is inconsistent with the Fortran 90 standard in several respects. These inconsistencies, such as register optimization problems, have implications for user codes that are discussed in detail in Section 16.2.2. They are also inconsistent with Fortran 77.
-but should read*
The MPI Fortran bindings are
isinconsistent with the Fortran90standard in several respects. These inconsistencies, such as register optimization problems, have implications for user codes that are discussed in detail in Section 16.2.2.They are also inconsistent with Fortran 77.-_MPI-2.2, Section 13.6.7 MPIOffset Type, page 442, lines 14-15 read*
In Fortran, the corresponding integer is an integer of kind MPI_OFFSET_KIND, defined in
mpif.h
and thempi
module.-but should read*
In Fortran, the corresponding integer is an integer with
ofkind parameter MPI_OFFSET_KIND, which is defined inmpif.h
,andthempi
module__, and thempi_f08
module__.'''MPI-2.2, Chapter 16.2, Fortran Support: [[BR]] Section 16.2.1 Overview, MPI-2.2, page 480, lines 23-47 '''
The Fortran MPI-2 language bindings have been designed to be compatible with the Fortran 90 standard (and later). These bindings are in most cases compatible with Fortran 77, implicit-style interfaces.
MPI defines two levels of Fortran support, described in Sections 16.2.3 and 16.2.4. In the rest of this section, "Fortran" and "Fortran 90" shall refer to "Fortran 90" and its successors, unless qualified.
Extended Fortran Support An implementation with this level of Fortran support provides Basic Fortran Support plus additional features that specifically support Fortran 90, as described in Section 16.2.4.
A compliant MPI-2 implementation providing a Fortran interface must provide Extended Fortran Support unless the target compiler does not support modules or KIND- parameterized types.
-together with MPI-2.2, page 488, lines 19-24*
A new set of functions to provide additional support for Fortran intrinsic numeric types, including parameterized types: MPI_SIZEOF, MPI_TYPE_MATCH_SIZE, MPI_TYPE_CREATE_F90_INTEGER, MPI_TYPE_CREATE_F90_REAL and MPI_TYPE_CREATE_F90_COMPLEX. Parameterized types are Fortran intrinsic types which are specified using KIND type parameters. These routines are described in detail in Section 16.2.5.
-together with MPI-2.2, page 489, lines 7-14*
Applications may use either the mpi module or the mpif.h include file. An implementation may require use of the module to prevent type mismatch errors (see below).
It must be possible to link together routines some of which
USE mpi
and others of whichINCLUDE mpif.h
.-but should read (TODO: check whether "the only new language feature" is true)*
The Fortran MPI
-2language bindings have been designed to be generally compatible with the Fortran 90 standard (and later). ~~These bindings are in most cases compatible with Fortran 77, implicit-style interfaces.~~MPI defines
two levelsthree methods of Fortran support: ~~, described in Sections 16.2.3 and 16.2.4. In the rest of this section, "Fortran" and "Fortran 90" shall refer to "Fortran 90" and its successors, unless qualified.~~Application subroutines and functions may use either one of the
mpimodules or the mpif.h include file. An implementation may require use of one of the modules to prevent type mismatch errors~~ (see below)~~.In a single application, it must be possible to link together routines some of which
USE mpi
and others of whichUSE mpi_f08
orINCLUDE mpif.h
.The INTEGER compile-time constant MPI_SUBARRAYS is MPI_SUBARRAYS_SUPPORTED if all choice arguments are defined in explicit interfaces with standardized assumed type and assumed rank, otherwise it equals MPI_SUBARRAYS_UNSUPPORTED. This constant exists with each Fortran support method, but not in the C/C++ header files. The value may be different for each Fortran support method. (#234-F)****
Section 16.2.6 describes additional functionality that is part of the Fortran support. This section defines a
newset of functions to provide additional support for Fortran intrinsic numeric types, including parameterized types. The functions are: MPI_SIZEOF, MPI_TYPE_MATCH_SIZE, MPI_TYPE_CREATE_F90_INTEGER, MPI_TYPE_CREATE_F90_REAL and MPI_TYPE_CREATE_F90_COMPLEX. Parameterized types are Fortran intrinsic types which are specified using KIND type parameters. ~~These routines are described in detail in Section 16.2.5.~~-MPI-2.2, Section 16.2.3 Basic Fortran Support, page 487, line 43 - page 488, line 4, reads*
16.2.3 Basic Fortran Support
Because Fortran 90 is (for all practical purposes) a superset of Fortran 77, Fortran 90 (and future) programs can use the original Fortran interface. The following additional requirements are added:
-but should read *
16.2.3
BasicFortran Support Through the mpif.h Include FileThe use of the mpif.h header file is strongly discouraged (#233-E).
Because Fortran 90 is (for all practical purposes) a superset of Fortran 77, Fortran 90 (and future) programs can use the original Fortran interface. The Fortran bindings are compatible with Fortran 77 implicit-style interfaces in most cases.
The following additional requirements are added:The include filempif.h
must:~~1. Implementations are required to provide the file mpif.h, as described in the original MPI-1 specification.~~
2. mpif.h mustBe valid and equivalent for both fixed- and free- source form.For each MPI routine, an implementation can choose to use an implicit or explicit interface.
-MPI-2.2, Section 16.2.4 Extended Fortran Support, page 488, lines 14-40 read*
16.2.4 Extended Fortran Support
Implementations with Extended Fortran support must provide:
Additionally, high-quality implementations should provide a mechanism to prevent fatal type mismatch errors for MPI routines with choice arguments.
The
mpi
ModuleAn MPI implementation must provide a module named
mpi
that can be used in a Fortran 90 program. This module must:Declare MPI functions that return a value.
An MPI implementation may provide in the
mpi
module other features that enhance the usability of MPI while maintaining adherence to the standard. For example, it may:-but should read*
16.2.4
ExtendedFortran Support Through thempi
ModuleImplementations with Extended Fortran support must provide:~~Additionally, high-quality implementations should provide a mechanism to prevent fatal type mismatch errors for MPI routines with choice arguments.~~
Thempi
ModuleAn MPI implementation must provide a module named
mpi
that can be used in a Fortran90program. This module must:Define all handles as INTEGER. This is refelcted in the first of the two Fortran interfaces in each MPI function definition.
An MPI implementation may provide other features in the
mpi
moduleother featuresthat enhance the usability of MPI while maintaining adherence to the standard. For example, it may:provide INTENT information in these interface blocks.Provide interfaces for all or for a subset of MPI routines.(#232-D)Provide INTENT information in these interface blocks.'''MPI-2.2, Section 16.2.4, page 489, lines 7-14 are removed (they have been already used in Section 16.2.1)'''
-After MPI-2.2, Section 16.2.4, page 489, line 30, the following section is added* (for better readability of this ticket, the following new text is not underlined although it should):
16.2.5 Fortran Support Through the
mpi_f08
ModuleAn MPI implementation must provide a module named
mpi_f08
that can be used in a Fortran program. With this module, new Fortran definitions are added for each MPI routine (#247-S), except for routines that are deprecated (#241-M). This module must:mpi
module). This is reflected in the second of the two Fortran interfaces in each MPI function definition. -(#231-C)*Set the MPI_SUBARRAYS compile-time constant to MPI_SUBARRAYS_UNSUPPORTED and declare choice buffers with a compiler-dependent mechanism that overrides type checking if the underlying Fortran compiler does not support the Fortran 2008 assumed-type and assumed-rank notation. In this case, the use of non-contiguous sub-arrays in nonblocking calls may be restricted as with the
mpi
module. (#234-F)-Advice to implementors.* In this case, the choice argument may be implemented with an explicit interface with compiler directives, for example:
!DEC$ ATTRIBUTES NO_ARG_CHECK :: BUF
[[BR]]!$PRAGMA IGNORE_TKR BUF
[[BR]]REAL, DIMENSION(*) :: BUF
-(End of advice to implementors.) (#234-F) *
status
andarray_of_statuses
output arguments asoptional
through function overloading, instead of usingMPI_STATUS_IGNORE
(#244-P).array_of_errcodes
output arguments asoptional
through function overloading, instead of usingMPI_ERRCODES_IGNORE
(#244-P).Declare all
ierror
output arguments asoptional
, except for user-defined callback functions (e.g., comm_copy_attr_fn) and their predefined callbacks (e.g., MPI_NULL_COPY_FN). (#239-K)-Rationale. For user-defined callback functions (e.g., comm_copy_attr_fn) and their predefined callbacks (e.g., MPI_NULL_COPY_FN), the ierror argument is not optional, i.e., these user-defined functions need not to check whether the MPI library calls these routine with or without an actual ierror output argument. -(End of rationale.) (#239-K)
'''Renumbering of MPI-2.2, Section 16.2.5 in Section 16.2.6, on page 489, line 31.
Impact on Implementations
This module requires mainly
<type> buf(*)
), new MPI datatype handling must be implemented based on the internal Fortran argument descriptor used with the "TYPE(*), DIMENSION(..)
" declarations, for details see Ticket #234-F.Impact on Applications / Users
None, as long they do not use this new module.
If they want to use this new
mpi_f08
module then they must modify:include 'mpif.h'
" or "USE mpi
" with "USE mpi_f08
"INTEGER
handle variables with the newTYPE(MPI_Comm)
, etc. (only if Ticket #231-C is voted in)Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
231-C: Fortran compile-time argument checking with individual handlesSee Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: 11 yes, 0 no, 5 abstain.
Description
-Major decisions in this ticket:*
mpi_f08
.-Details:*
In principle, there are 3 different solutions. There are several problem areas:
Minimizing the problems of conversion between different handle language bindings:
(A) New derived type consist of exactly one MPI_VAL entry that contains the existing INTEGER value. With this, conversion between old and new Fortran handles are trivial application code:[[BR]]
-
Conversion from old to new: old = new%MPI_VAL [[BR]]-
Conversion from new to old: new%MPI_VAL = old [[BR]] Existing C-Fortran conversion routines can be directly applied to new%MPI_VAL.(B) The new derived type is allowed to contain additional vendor (MPI library) specific data. Conversion from new to old is still trivial (old = new%MPI_VAL), but for the other direction, a conversion function or subroutine is necessary.
(C) No rules about the content of the handle derived types: New Conversion routines between old and new Fortran are necessary, and also between the C handles and the new ones in Fortran.
Based on the advantages and disadvantages shown above, the solution is based on A.
Extended Scope
None.
History
Proposed Solution
-Rule about editing:* [[BR]] For the new Fortran handle types, one should use, e.g.,
\ftype{TYPE(MPI\_Comm)}\cdeclindex{MPI\_Comm}
-MPI-2.2, Chapter 2, Terms and Convention, Section 2.5.1 Opaque Object, page 12, lines 44-47 read*
In Fortran, all handles have type INTEGER. In C and C++, a different handle type is defined for each category of objects. In addition, handles themselves are distinct objects in C++. The C and C++ types must support the use of the assignment and equality operators.
-but should read*
In Fortran__ with
USE mpi
orINCLUDE 'mpif.h'
, all handles have type INTEGER. In Fortran withUSE mpi_f08
, and in C and C++, a different handle type is defined for each category of objects. With FortranUSE mpi_f08
, the handles are defined as Fortran sequenced derived types that consist of only one elementINTEGER :: MPI_VAL
. The internal handle value is identical to the Fortran INTEGER value used in thempi
module and inmpif.h
. The names are identical to the names in C, except that they are not case sensitive. For example:__
In addition, handles themselves are distinct objects in C++. The C and C++ types must support the use of the assignment and equality operators.
-Same section, after the Advice to implementers, MPI-2.2, page 13, line 4 add:*
**Rationale. Due to the sequence attribute in the definition of handles in the
mpi_f08
module, the new Fortran handles are associated with one numerical storage unit, i.e., they have the same C binding as the INTEGER handles of thempi
module. Due to the equivalence of the integer values, applications can easily convert MPI handles between all three supported Fortran methods. For example, an integer communicator handleCOMM
can be converted directly into an exactly equivalentmpi_f08
communicator handle namedcomm_f08
bycomm_f08%MPI_VAL=COMM
, and vice versa. -(End of rationale.)***-MPI-2.2, Chapter 2, Terms and Conventions, Section 2.6.2 Fortran Binding Issues, page 18, line 3 reads*
Handles are represented in Fortran as INTEGERs.
-but should read*
Handles are represented in Fortran as INTEGERs__, or with the
mpi_f08
module as a derived type, see MPI-2.2, Section 2.5.1 on page 12__.-MPI-2.2, Chapter 9, The Info Object, page 299, lines 14-15 read*
Many of the routines in MPI take an argument
info
.info
is an opaque object with a handle of typeMPI_Info
in C,MPI::Info
in C++, andINTEGER
in Fortran.-but should read*
Many of the routines in MPI take an argument
info
.info
is an opaque object with a handle of typeMPI_Info
in C__ and Fortran with thempi_f08
module,MPI::Info
in C++, andINTEGER
in Fortran with thempi
module or the include filempif.h
__.'''MPI-2.2, Section 10.3.2 Starting Processes and Establishing Communication, in the explanation of the argument list of MPI_COMM_SPAWN, MPI-2.2, page 311, lines 39-40 read'''
The info argument The info argument to all of the routines in this chapter is an opaque handle of type MPI_Info in C, MPI::Info in C++ and INTEGER in Fortran.
-but should read*
The info argument The info argument to all of the routines in this chapter is an opaque handle of type MPI_Info in C__ and Fortran with the
mpi_f08
module, MPI::Info in C++ and INTEGER in Fortran with thempi
module or the include filempif.h
__.-MPI-2.2, Section 16.3.4 Transfer of Handles, page 499, lines 1-2 read*
The type definition MPI_Fint is provided in C/C++ for an integer of the size that matches a Fortran INTEGER; usually, MPI_Fint will be equivalent to int.
-but should read*
The type definition MPI_Fint is provided in C/C++ for an integer of the size that matches a Fortran INTEGER; usually, MPI_Fint will be equivalent to int. With the Fortran
mpi
module or thempif.h
include file, a Fortran handle is a Fortran INTEGER value that can be used in the following conversion functions. With the Fortranmpi_f08
module, a Fortran handle is a derived tpye that contains the Fortran INTEGER field MPI_VAL, which contains the the INTEGER value that can be used in the following conversion functions.-Appendix A.1.1 Defined Constants:*
Impact on Implementations
Nearly none, because the same wrappers can be used for the old and the new module (because the C binding of the new and old handles are identical).
Impact on Applications / Users
None, as long they do not use the new
mpi_f08
module.If they want to use this new
mpi_f08
module then they must modify:INTEGER
handle variables by the newTYPE(MPI_Comm)
, etc.Alternative Solutions
See description of this ticket.
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
232-D: Existing module "USE mpi" with compile-time argument checkingSee Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: 7 yes, 0 no, 11 abstain.
Description
-Major decisions in this ticket:*
mpi
module.-Details:*
It is now required that "
USE mpi
" guarantees compile-time argument checking. Choice arguments (i.e., the buffers) may be handled without compile-time argument checking through a simple call by reference or in-and-out-copy in case of non-contiguous sub-arrays. MPI Handles are still FortranINTEGER
.Extended Scope
None.
History
Proposed Solution
-MPI-2.2, Section 16.2.4, page 489, lines 16-29 read*
No Type Mismatch Problems for Subroutines with Choice Arguments
A high-quality MPI implementation should provide a mechanism to ensure that MPI choice arguments do not cause fatal compile-time or run-time errors due to type mismatch. An MPI implementation may require applications to use the mpi module, or require that it be compiled with a particular compiler flag, in order to avoid type mismatch problems.
-but should read*
Impact on Implementations
The
mpi
module must be implemented with explicit subroutine interfaces for all MPI routines. This can be implemented with most Fortran compilers with the following method:Substitution of all
<type> xxx(*)
by!Intel compiler:
[[BR]]!DEC$ ATTRIBUTES NO_ARG_CHECK :: xxx
[[BR]]!some other compiler:
[[BR]]!$PRAGMA IGNORE_TKR xxx
[[BR]]REAL, DIMENSION(*) :: xxx
with xxx = BUF, BUFFER, BUFFER_ADDR, SENDBUF, RECVBUF, LOCATION, BASE, INBUF, OUTBUF, INOUTBUF, ORIGIN_ADDR
DIMENSION(*)
is omitted.-TODO* Rolf Rabenseifner has a freely usable interface that is directly copied from MPI-2.2.
Impact on Applications / Users
None, as long as the user program is syntactically correct. Current MPI-2.2 already allows compile-time argument checking, therefore portable user programs must be syntactically correct. Users may need to correct syntactically wrong programs if their current MPI-2.2 library has not yet implemented explicit interfaces with compile-time argument checking.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
233-E: Deprecating INCLUDE 'mpif.h'See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: 9 yes, 0 no, 10 abstain.
Description
-Major decisions in this ticket:*
-Details:*
There isn't any significant further need of
mpif.h
. It can be easily substituted by thempi
module as long as the application uses the MPI interface correctly, because thempi
moduleKnown problems (only one in the moment):
Extended Scope
None.
History
Proposed Solution
-As already mentioned in Ticket #230-B, in the new section "1.6 Background of MPI-3.0":*:
The Fortran include file
mpif.h
is deprecated (#233-E).-As already mentioned in ticket #230-B, MPI-2.2, page 480, Section 16.2.1, line 37-39 are substituted by*
-As already mentioned in ticket #230-B, MPI-2.2, Section 16.2.3, page 487, line 44 is substituted by*
16.2.3
BasicFortran Support Through the mpif.h Include-fileThe mpif.h header file is deprecated.
Impact on Implementations
None.
Impact on Applications / Users
User may need to switch to the
mpi
module due to user-specific rules that require that only features are used that are not in the category "use is strongly discouraged".This requires that
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
234-F: Choice buffers through "TYPE(_), DIMENSION(..)" declarationsSee Ticket #229-A for an _overview* on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: 13 yes, 0 no, 2 abstain.[[BR]] Voting was under the assumption that "TYPE(*), DIMENSION(..)" will have Fortran standard quality.
Description
-Major decisions in this ticket:*
-Details:*
Fortran 2008 will provide assumed type and assumed rank declarations for arguments, i.e.,
TYPE(*), DIMENSION(..)
.Details are explained in http://www.j3-fortran.org/doc/year/08/08-271.txt
With
a wrapper mpi_xxx_f_to_c (implemented in C or Fortran) is called and buf is passed as a pointer to a -Fortran descriptor* as described in http://www.j3-fortran.org/doc/year/08/08-305.txt or later.
Extended Scope
None.
History
The Fortran standardization body strongly works on this topic to provide a solution that explicit interfaces can be provided for all MPI routines including all choice arguments. A positive side effect is that the problems with strided arrays and nonblocking routines can also vanish. For this an implementation effort is necessary.
Proposed Solution
-MPI-2.2, Chapter 2, Terms and Conventions, Section 2.5.5 Choice, page 15, lines 38-42 read*
MPI functions sometimes use arguments with a choice (or union) data type. Distinct calls to the same routine may pass by reference actual arguments of different types. The mechanism for providing such arguments will differ from language to language. For Fortran, the document uses to represent a choice variable;
for C and C++, we use void *.
-but should read*
MPI functions sometimes use arguments with a choice (or union) data type. Distinct calls to the same routine may pass by reference actual arguments of different types. The mechanism for providing such arguments will differ from language to language. For Fortran with the include file to represent a choice variable;
*with the Fortran
mpif.h
or thempi
module, the document usesmpi_f08
module, such arguments are declared with the Fortran 2008 syntax `TYPE(), DIMENSION(..)`;* for C and C++, we use void .**Advice to implementors. The implementor can freely choose how to implement choice arguments in the
mpi
module, e.g., with a non-standard compiler-dependent method that has the quality of the call mechanism in the implicit Fortran interfaces, or with the method defined for thempi_f08
module. -(End of advice to implementors.)***-MPI-2.2, Chapter 2, Terms and Conventions, Section 2.6 Language Binding, page 16, lines 21-22 read*
MPI bindings are for Fortran 90, though they are designed to be usable in Fortran 77 environments.
-but should read*
MPI bindings are for Fortran 90 and later, though they
arewere originally designed to be usable in Fortran 77 environments. With thempi_f08
module, the two Fortran 2008 features assumed type and assumed rank are also required, see MPI-2.2, Section 2.5.5. on page 15.(Comment: MPI-2.2, Section 2.5.5. contains the new choice method
TYPE(*), DIMENSION(..)
, see above.)-MPI-2.2, Chapter 2, Terms and Conventions, Section 2.6.2 Fortran Binding Issues, page 17, lines 37-40 read*
Originally, MPI-1.1 provided bindings for Fortran 77. These bindings are retained, but they are now interpreted in the context of the Fortran 90 standard. MPI can still be used with most Fortran 77 compilers, as noted below. When the term Fortran is used it means Fortran 90.
-but should read*
Originally, MPI-1.1 provided bindings for Fortran 77. These bindings are retained, but they are now interpreted in the context of the Fortran 90 standard. MPI can still be used with most Fortran 77 compilers, as noted below. When the term Fortran is used it generally means Fortran 90 and later__; it means Fortran 2008 and later if the
mpi_f08
module is used__.Text related to this ticket but shown in Ticket #230-B:
-In Section 16.2.1 Overview:*
The INTEGER compile-time constant MPI_SUBARRAYS equals MPI_SUBARRAYS_SUPPORTED if all choice arguments are defined in explicit interfaces with standardized assumed type and assumed rank, otherwise it equals MPI_SUBARRAYS_UNSUPPORTED. This constant exists with each Fortran support method, but not in the C/C++ header files. The value may be different for each Fortran support method. (#234-F)****
-_In new Section 16.2.5 Fortran Support Through the mpif08 Module:*
Set the MPI_SUBARRAYS compile-time constant to MPI_SUBARRAYS_UNSUPPORTED and declare choice buffers with a compiler-dependent mechanism that overrides type checking if the underlying Fortran compiler does not support the Fortran 2008 assumed-type and assumed-rank notation. In this case, the use of non-contiguous sub-arrays in nonblocking calls may be restricted as with the
mpi
module. (#234-F)Advice to implementors. In this case, the choice argument may be implemented with an explicit interface with compiler directives, for example:
!DEC$ ATTRIBUTES NO_ARG_CHECK :: BUF
[[BR]]!$PRAGMA IGNORE_TKR BUF
[[BR]]REAL, DIMENSION(*) :: BUF
(End of advice to implementors.) (#234-F)****
Text related to this ticket but shown in Ticket #232-D:
-In Section 16.2.4 Fortran Support through the mpi Module:*
In this case, the compile-time constant MPI_SUBARRAYS equals MPI_SUBARRAYS_UNSUPPORTED \ (#234-F).**
'''See also Tickets #247-S and #249-U.
Impact on Implementations
This ticket has major impact on existing MPI implementations, because the handling of choice buffer arguments must be reimplemented. It is definitely different from the existing C (void *) interface. The buffer description is now a combination of the Fortran sub-array argument handling (i.e., non-contiguous sub-arrays) through an array descriptor and the MPI derived datatype handles. The MPI derived datatype handles apply to a virtual contiguous memory area that is built out of the portions defined in the Fortran array descriptor.
Impact on Applications / Users
Removal of all restrictions with the usage of Fortran array triplet-subscripts (e.g.,
a(1:100:3)
) together with MPI nonblocking routines, but not with vector-subsripts (e.g.,a([1,7,8,17,97])
).Alternative Solutions
None.
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
235-G: Corrections to "Problems with Fortran Bindings" (MPI-2.2 p.481) and "Problems Due to Strong Typing" (p.482)See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
No votes upto now, because no major decision within this ticket.
Description
-Major decisions in this ticket:*
-Details:*
The problems due to strong typing are partially solved by the new module mpi_f08. The hints must therefore now differentiate between the Fortran support methods. With the scalar versus array problem, the example is modified, because with choice buffers, the problem is normally solved.
Extended Scope
None.
History
Proposed Solution
-MPI-2.2, Section 16.2.2 Problems With Fortran Bindings for MPI, page 481, lines 11-12 read*
It supersedes and replaces the discussion of Fortran bindings in the original MPI specification (for Fortran 90, not Fortran 77).
-and should be removed*
~~It supersedes and replaces the discussion of Fortran bindings in the original MPI specification (for Fortran 90, not Fortran 77).~~
-MPI-2.2, Section 16.2.2 Problems With Fortran Bindings for MPI, page 481, lines 14-15 read*
-but should read*
-MPI-2.2, Section 16.2.2, Subsection "Problems Due to Strong Typing", page 482, lines 11-14 read*
All MPI functions with choice arguments associate actual arguments of different Fortran datatypes with the same dummy argument. This is not allowed by Fortran 77, and in Fortran 90 is technically only allowed if the function is overloaded with a different function for each type. In C, the use of void* formal arguments avoids these problems.
-but should read*
All MPI functions with choice arguments associate actual arguments of different Fortran datatypes with the same dummy argument. This is not allowed by Fortran 77, and in Fortran 90 is technically only allowed if the function is overloaded with a different function for each type. In C, the use of void* formal arguments avoids these problems. *Similar to C, with Fortran 2008 and later together
mpi_f08
module, the problem is avoided by declaring choice arguments with TYPE(), DIMENSION(..), i.e., as assumed type and assumed rank dummy arguments.**-MPI-2.2, Section 16.2.2, Subsection "Problems Due to Strong Typing", page 482, lines 15-24 read*
The following code fragment is technically illegal and may generate a compile-time error.
In practice, it is rare for compilers to do more than issue a warning, though there is concern that Fortran 90 compilers are more likely to return errors.
-but should read*
Using
INCLUDE mpif.h
, theThefollowing code fragmentismight technicallyillegalbe invalid and may generate a compile-time error.In practice, it is rare for compilers to do more than issue a warning~~, though there is concern that Fortran 90 compilers are more likely to return errors~~. Using the
mpi_f08
ormpi
module, the problem is usually resolved through the standardized assume-type and assume-rank declarations of the dummy arguments, or with non-standard Fortran options preventing type checking for choice arguments.-MPI-2.2, Section 16.2.2, Subsection "Problems Due to Strong Typing", page 482, lines 25-30 read*
It is also technically illegal in Fortran to pass a scalar actual argument to an array dummy argument. Thus the following code fragment may generate an error since the buf argument to MPI_SEND is declared as an assumed size array buf(*).
-but should read*
It is also technically
illegalinvalid in Fortran to pass a scalar actual argument to an array dummy argument. Thus__, when using the module mpi or mpi_f08, the following code fragmentmayusually generates an error since thebufdims and periods arguments to MPI_SENDCART_CREATEisare declared asanassumed size arrays_ buf(*) **INTEGER DIMS() and LOGICAL PERIODS(_)**.Using the deprecated INCLUDE 'mpif.h', compiler warnings are not expected except if this include file also uses Fortran explicit interfaces.
-MPI-2.2, Section 16.2.2, Subsection "Problems Due to Strong Typing", page 482, lines 31-38 read*
-and should be removed*
Impact on Implementations
None.
Impact on Applications / Users
None.
Alternative Solutions
Entry for the Change Log
None.
236-H: Corrections to "Problems Due to Data Copying and Sequence Association" (MPI-2.2 page 482) See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
No votes upto now, because no major decision within this ticket.
Description
-Major decisions in this ticket:*
-Details:*
Extended Scope
None.
History
Proposed Solution
-MPI-2.2, Section 16.2.2, Subsection "Problems Due to Data Copying and Sequence Association", page 482, lines 41 - page 484, line 18 reads*
Implicit in MPI is the idea of a contiguous chunk of memory accessible through a linear[[BR]] ...[[BR]] compiler cannot be used for applications that use memory references across subroutine calls as in the example above.
-but should read*
If MPI_SUBARRAYS equals MPI_SUBARRAYS_SUPPORTED: (for better readability of this ticket, the following new text is not underlined although it should)
Choice buffer arguments are declared as TYPE(*), DIMENSION(..). For example, considering the following code fragment:
REAL s(100), r(100)[[BR]] CALL MPI_Isend(s(1:100:5), 3, MPI_REAL, ..., rq, ierror)[[BR]] CALL MPI_Wait(rq, status, ierror)[[BR]] CALL MPI_Irecv(r(1:100:5), 3, MPI_REAL, ..., rq, ierror)[[BR]] CALL MPI_Wait(rq, status, ierror)
In this case, the individual elements s(1), s(6), s(11), etc. are sent between the start of MPI_ISEND and the end of MPI_WAIT even though the compiled code may not copy s(1:100:5) to a contiguous temporary scratch buffer. Instead, the compiled code may pass a descriptor to MPI_ISEND that allows MPI to operate directly on s(1), s(6), s(11), ..., s(96).
All non-blocking MPI communication functions behave as if the user-specified elements of choice buffers are copied to a contiguous scratch buffer in the MPI runtime environment. All datatype descriptions (in the example above, "3, MPI_REAL") read and store data from and to this virtual contiguous scratch buffer. Displacements in MPI derived datatypes are relative to the beginning of this virtual contiguous scratch buffer. Upon completion of a non-blocking receive operation (e.g., when MPI_WAIT on a corresponding MPI_Request returns), it is as if the received data has been copied from the virtual contiguous scratch buffer back to the non-contiguous application buffer. In the example above, r(1), r(6), and r(11) will be filled with the received data when MPI_WAIT returns.
-Advice to implementors.* The Fortran descriptor for TYPE(), DIMENSION(..) arguments contains enough information that the MPI library can make a real contiguous copy of non-contiguous user buffers. Efficient implementations may avoid such additional memory-to-memory data copying. -(End of advice to implementors.)
-Rationale. If MPI_SUBARRAYS equals MPI_SUBARRAYS_SUPPORTED, non-contiguous buffers are handled inside of the MPI library instead of by the compiled user code. Therefore the scope of scratch buffers can be from the beginning of a non-blocking operation until the completion of the operation although beginning and completion are implemented in different routines. If MPI_SUBARRAYS equals MPISUBARRAYSUNSUPPORTED, such scratch buffers can be organized only by the compiler for the duration of the non-blocking call, which is too short for implementing the whole MPI operation. -(End of rationale.)
If MPI_SUBARRAYS equals MPISUBARRAYSUNSUPPORTED:
Implicit in MPI is the idea of a contiguous chunk of memory accessible through a linear[[BR]] ...[[BR]] compiler cannot be used for applications that use memory references across subroutine calls as in the example above.
Impact on Implementations
None. (This is only a descriptive ticket.)
Impact on Applications / Users
None.
Alternative Solutions
Entry for the Change Log
None.
237-I: Corrections to problems due to "Fortran 90 Derived Types" (MPI-2.2 page 484)See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
No votes upto now, because no major decision within this ticket.
Description
-Major decisions in this ticket:*
-Details:*
This section is currently wrong.
Extended Scope
None.
History
Proposed Solution
-MPI-2.2, Section 16.2.2, Subsection "Fortran 90 Derived Types", page 484, lines 34 - page 485, line 3 reads*
Fortran 90 Derived Types
MPI does not explicitly support passing Fortran 90 derived types to choice dummy arguments. Indeed, for MPI implementations that provide explicit interfaces through the mpi module a compiler will reject derived type actual arguments at compile time. Even when no explicit interfaces are given, users should be aware that Fortran 90 provides no guarantee of sequence association for derived types or arrays of derived types. For instance, an array of a derived type consisting of two elements may be implemented as an array of the first elements followed by an array of the second. Use of the SEQUENCE attribute may help here, somewhat.
The following code fragment shows one possible way to send a derived type in Fortran. The example assumes that all data is passed by address.
-but should read*
Fortran
90Derived TypesMPI does
notexplicitly support passing Fortran90sequence derived types to choice dummy arguments, but does not support Fortran non-sequence derived types. ~~Indeed, for MPI implementations that provide explicit interfaces through the mpi module a compiler will reject derived type actual arguments at compile time. Even when no explicit interfaces are given, users should be aware that Fortran 90 provides no guarantee of sequence association for derived types or arrays of derived types. For instance, an array of a derived type consisting of two elements may be implemented as an array of the first elements followed by an array of the second. Use of the SEQUENCE attribute may help here, somewhat.~~The following code fragment shows one possible type that can be used to send a sequence derived type in Fortran.
-MPI-2.2, Section 16.2.2, Subsection "Fortran 90 Derived Types", page 485, lines 29 - 31 read*
-but should read* (comment and %i removed)
Impact on Implementations
None.
Impact on Applications / Users
The user can learn, that he/she can use Fortran sequence derived types. This was possible in the past. Only this wrong advice could prevent users from using this MPI feature.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section 16.2.2 on page 481.[[BR]] Fortran sequence derived types can be used for buffers. The section on Fortran derived types was therefore modified.
238-J: Corrections to "Registers and Compiler Optimizations" (MPI-2.2 page 371) and "A Problem with Register Optimization" (page 485)See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: 6 yes, 0 no, 9 abstain.[[BR]] With the comment: "or any future method".
Description
-Major decisions in this ticket:*
-Details:*
Citing from the Fortran 2008 standard:
5.3.4 ASYNCHRONOUS attribute
An entity with the ASYNCHRONOUS attribute is a variable that may be subject to asynchronous input/output. The base object of a variable shall have the ASYNCHRONOUS attribute in a scoping unit if
any statement of the scoping unit is executed while the variable is a pending I/O storage sequence affector (9.6.2.5).
5.3.17 TARGET attribute
The TARGET attribute specifies that a data object may have a pointer associated with it (7.2.2). An object without the TARGET attribute shall not have a pointer associated with it.
5.3.19 VOLATILE attribute
The VOLATILE attribute specifies that an object may be referenced, defined, or become undefined, by means not specified by the program. A pointer with the VOLATILE attribute may additionally have its association status, dynamic type and type parameters, and array bounds changed by means not specified by the program. An allocatable object with the VOLATILE attribute may additionally have its allocation status, dynamic type and type parameters, and array bounds changed by means not specified by the program.
Extended Scope
None.
History
Proposed Solution
-MPI-2.2, Chapter 11, One-sided communications, Section 11.7.3 Registers and Compiler Optimizations, page 372, lines 1-9 read*
MPI implementations will avoid this problem for standard conforming C programs. Many Fortran compilers will avoid this problem, without disabling compiler optimizations. However, in order to avoid register coherence problems in a completely portable manner, users should restrict their use of RMA windows to variables stored in COMMON blocks, or to variables that were declared VOLATILE (while VOLATILE is not a standard Fortran declaration, it is supported by many Fortran compilers). Details and an additional solution are discussed in Section 16.2.2, "A Problem with Register Optimization," on page 485. See also, "Problems Due to Data Copying and Sequence Association," on page 482, for additional Fortran problems.
-but should read*
MPI implementations will avoid this problem for standard conforming C programs. Many Fortran compilers will avoid this problem, without disabling compiler optimizations. However, in order to avoid register coherence problems in a completely portable manner, users should restrict their use of RMA windows to variables stored in modules or in COMMON blocks, or to variables that were declared VOLATILE (but this attribute may inhibit optimization of any code containing the RMA window)
(while VOLATILE is not a standard Fortran declaration, it is supported by many Fortran compilers). Further dDetails andanadditional solutions are discussed in Section 16.2.2, "A Problem with Register Optimization," on page 485. See also, "Problems Due to Data Copying and Sequence Association," on page 482, for additional Fortran problems.'''MPI-2.2, Section 16.2.2, Subsection "A Problem with Register Optimization", page 485, lines 34-42 read
A Problem with Register Optimization
MPI provides operations that may be hidden from the user code and run concurrently with it, accessing the same memory as user code. Examples include the data transfer for an MPI_IRECV. The optimizer of a compiler will assume that it can recognize periods when a copy of a variable can be kept in a register without reloading from or storing to memory. When the user code is working with a register copy of some variable while the hidden operation reads or writes the memory copy, problems occur. This section discusses register optimization pitfalls.
-but should read*
Problems with Register Optimization and Temporary Memory Modifications
MPI provides operations that may be hidden from the user code and run concurrently with it, accessing the same memory as user code. Examples include the data transfer for an MPI_IRECV. The optimizer of a compiler will assume that it can recognize periods when a copy of a variable can be kept in a register without reloading from or storing to memory. When the user code is working with a register copy of some variable while the hidden operation reads or writes the memory copy, problems occur. This section discusses register optimization pitfalls and problems with temporary memory modifications. These problems are independent of the Fortran support method, i.e., they occur with the mpi_f08 module, the mpi module, and the mpif.h include file.
(for better readability of this ticket, the following new text is not underlined although it should)
This section shows four problematic usage areas (the abbreviations in parentheses are used in the table below):
Use of MPI_BOTTOM together with absolute displacements in MPI datatypes, or relative displacements between two variables in such datatypes (Bottom).
The compiler is allowed to cause two optimization problems:
Temporary memory modifications (Memory).
The optimization problems do not occur in all usage areas:
The application writer has several methods to circumvent parts of these problems through special declarations for the send and receive buffers used:
Usage of the Fortran VOLATILE attribute.
Each of these methods may solve only a subset of the problems, may have more or less of a performance drawback, and may not be usable in every application context.
The following table shows the usability of each method:
The next paragraphs describe the problems in detail.
'''MPI-2.2, Section 16.2.2, Subsection "A Problem with Register Optimization", page 485, lines 43-48 read
When a variable is local to a Fortran subroutine (i.e., not in a module or COMMON block), the compiler will assume that it cannot be modified by a called subroutine unless it is an actual argument of the call. In the most common linkage convention, the subroutine is expected to save and restore certain registers. Thus, the optimizer will assume that a register which held a valid copy of such a variable before the call will still hold a valid copy on return.
-but should read*
\paragraph{Nonblocking operations and register optimization / code movement.} When a variable is local to a Fortran subroutine (i.e., not in a module or COMMON block), the compiler will assume that it cannot be modified by a called subroutine unless it is an actual argument of the call. In the most common linkage convention, the subroutine is expected to save and restore certain registers. Thus, the optimizer will assume that a register which held a valid copy of such a variable before the call will still hold a valid copy on return.
'''MPI-2.2, Section 16.2.2, Subsection "A Problem with Register Optimization", page 486, lines 28-42 read
Example 16.12 shows extreme, but allowed, possibilities.
Example 16.12 Fortran 90 register optimization – extreme.
MPI_WAIT on a concurrent thread modifies buf between the invocation of MPI_IRECV and the finish of MPI_WAIT. But the compiler cannot see any possibility that buf can be changed after MPI_IRECV has returned, and may schedule the load of buf earlier than typed in the source. It has no reason to avoid using a register to hold buf across the call to MPI_WAIT. It also may reorder the instructions as in the case on the right.
-and should be moved after page 485, line 48 and the following line should be added before the IRECV line:*
-After this example, the following new example should be added* (for better readability of this ticket, the following new text is not underlined although it should):
Example 16.12(new) Similar example with MPI_ISEND
Due to allowed code movement, the content of buf may already be overwritten when the sending of the content of buf is executed. The code movement is permitted because the compiler cannot detect a possible access to buf in MPI_WAIT (or in a second thread between the start of MPI_ISEND and the end of MPI_WAIT). Note that code movement can also occur across subroutine boundaries when subroutines or functions are inlined.
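The code of Example 16.12(new) is not reproduced in this ticket text. A minimal sketch of the pattern described above (routine and variable names are illustrative; the call forms follow the mpi_f08 proposals of Tickets #239-K and #244-P) could look like:
{{{
! Sketch of the MPI_ISEND code-movement hazard (illustrative only).
SUBROUTINE isend_hazard(dest, tag)
  USE mpi_f08
  INTEGER, INTENT(IN) :: dest, tag
  TYPE(MPI_Request) :: request
  REAL :: buf(100)
  buf = 1.0                                 ! values intended to be sent
  CALL MPI_Isend(buf, 100, MPI_REAL, dest, tag, MPI_COMM_WORLD, request)
  ! ... computation that does not reference buf ...
  CALL MPI_Wait(request)
  buf = 2.0    ! the compiler may move this store ahead of MPI_Wait, because
               ! buf is not an argument of MPI_Wait; the transfer may then
               ! send the overwritten values
END SUBROUTINE isend_hazard
}}}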
This register optimization / code movement problem does not occur with MPI parallel file I/O split collective operations, because in the ..._BEGIN and ..._END calls, the same buffer has to be provided as actual argument.
-After this example, the following new paragraph should be added* (for better readability of this ticket, the following new text is not underlined although it should):
\paragraph{Nonblocking operations and temporary memory modifications.} The compiler is allowed to temporarily modify data in memory. Example 16.xx shows one possibility.
Example 16.xx Overlapping Communication and Computation
The compiler may substitute the nested loops through loop fusion by
with buf_1dim(10000) as the 1-dimensional equivalent of buf(100,100). The nonblocking receive may receive the data into the boundary buf(1,1:100) while the fused loop is temporarily using this part of the buffer. When the tmp data is written back to buf, the old data is restored and the received data is lost. Note that this problem occurs also with MPI parallel file I/O split collective operations, with the local buffer between the ..._BEGIN and ..._END calls.
This type of compiler optimization can be prevented when buf is declared with the Fortran ASYNCHRONOUS attribute.
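The loop nests of Example 16.xx and the protecting declaration are not reproduced in this ticket text. A sketch of the described hazard, assuming the buf(100,100) layout mentioned above and illustrative names, could be:
{{{
! Sketch of the temporary-memory-modification hazard (illustrative only).
SUBROUTINE overlap_hazard(source, tag)
  USE mpi_f08
  INTEGER, INTENT(IN) :: source, tag
  TYPE(MPI_Request) :: request
  REAL, ASYNCHRONOUS :: buf(100,100)   ! the ASYNCHRONOUS attribute prevents
                                       ! the compiler from temporarily
                                       ! modifying any part of buf
  INTEGER :: i, j
  ! Receive the boundary buf(1,1:100) while computing on the interior.
  CALL MPI_Irecv(buf(1,1:100), 100, MPI_REAL, source, tag, MPI_COMM_WORLD, request)
  DO j = 1, 100
    DO i = 2, 100
      buf(i,j) = buf(i,j) + 1.0
    END DO
  END DO
  ! Without ASYNCHRONOUS, loop fusion over a 1-dimensional equivalent
  ! buf_1dim(10000) could temporarily write the boundary elements and
  ! lose the data received into buf(1,1:100).
  CALL MPI_Wait(request)
END SUBROUTINE overlap_hazard
}}}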
\paragraph{One-sided communication.} An example with instruction reordering due to register optimization can be found in Section 11.7.3 on page 371.
'''MPI-2.2, Section 16.2.2, Subsection "A Problem with Register Optimization", page 486, lines 1-27 read
Normally users are not afflicted with this. But the user should pay attention to this [[BR]] ... [[BR]] and MPI_BOTTOM.
-but should read*
\paragraph{MPI_BOTTOM and combining independent variables in datatypes.} Normally users are not afflicted with this. But the user should pay attention to this [[BR]] ... [[BR]] and MPI_BOTTOM.
-After these paragraphs, the following paragraphs should be added* (for better readability of this ticket, the following new text is not underlined although it should):
Example 16.11(new) Similar example with MPI_SEND
Several successive assignments to the same variable can be combined in such a way that only the last assignment is executed. Successive means that there is no interfering read access to this variable in between. The compiler cannot detect that the call to MPI_SEND is interfering, because the read access to buf is hidden by the usage of MPI_BOTTOM.
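The code of Example 16.11(new) is not part of this ticket text. A sketch of the described hazard (a datatype carrying the absolute address of buf is used with MPI_BOTTOM; all names and values are illustrative) could be:
{{{
! Sketch of the MPI_BOTTOM hazard with MPI_SEND (illustrative only).
SUBROUTINE bottom_hazard(dest, tag)
  USE mpi_f08
  INTEGER, INTENT(IN) :: dest, tag
  TYPE(MPI_Datatype) :: dtype
  INTEGER(KIND=MPI_ADDRESS_KIND) :: disp(1)
  REAL :: buf

  ! Build a datatype that addresses buf absolutely.
  CALL MPI_Get_address(buf, disp(1))
  CALL MPI_Type_create_struct(1, (/1/), disp, (/MPI_REAL/), dtype)
  CALL MPI_Type_commit(dtype)

  buf = 1.0     ! value intended to be sent
  CALL MPI_Send(MPI_BOTTOM, 1, dtype, dest, tag, MPI_COMM_WORLD)
  buf = 2.0     ! the compiler may combine both assignments and execute only
                ! this one, because it cannot see that MPI_Send reads buf
  CALL MPI_Type_free(dtype)
END SUBROUTINE bottom_hazard
}}}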
\paragraph{Solutions.} The following paragraphs show in detail how these problems can be solved in a portable way. Several solutions are presented because they have different implications on performance. Only one solution (with VOLATILE) solves all problems, but it may have the most negative impact on performance.
\paragraph{Fortran ASYNCHRONOUS attribute.} Declaring a buffer with the Fortran ASYNCHRONOUS attribute in a scoping unit (or BLOCK) tells the compiler that any statement of the scoping unit may be executed while the buffer is affected by a pending asynchronous input/output operation. Each library call (e.g., to an MPI routine) within the scoping unit may contain a Fortran asynchronous I/O statement, e.g., the Fortran WAIT statement.
In the case of nonblocking MPI communication, the send and receive buffers should be declared with the Fortran ASYNCHRONOUS attribute within each scoping unit (or BLOCK) where the buffers are declared and statements are executed between the start (e.g., MPI_IRECV) and completion (e.g., MPI_WAIT) of the nonblocking communication. Declaring REAL, ASYNCHRONOUS :: buf in Examples 16.12 and 16.12(new), and REAL, ASYNCHRONOUS :: buf(100,100) in Example 16.xx, solves the register optimization and temporary memory modification problems.
-Rationale. The combination of a nonblocking MPI communication call with a buffer in the argument list together with a subsequent call to MPI_WAIT or MPI_TEST is similar to the combination of a Fortran asynchronous read or write together with the matching Fortran wait statement. To prevent incorrect register optimizations or code movement, the Fortran standard requires in the case of Fortran I/O that the ASYNCHRONOUS attribute is specified for the buffer. The ASYNCHRONOUS attribute also works with the asynchronous MPI routines because the compiler must expect that inside the MPI routines such Fortran asynchronous read, write, or wait statements may be called. -(End of rationale.)
In Examples 16.11 and 16.11(new) and also in the example in Section 11.7.3 on page 371, the ASYNCHRONOUS attribute may also help, but this is not guaranteed because there is no I/O counterpart to the MPI usage.
-Rationale. In the case of using MPI_BOTTOM or one-sided synchronizations (e.g., MPI_WIN_FENCE), the buffer is not specified, i.e., those calls can include only a Fortran WAIT statement (or another routine that finishes an asynchronous I/O). Additionally, with Fortran asynchronous I/O, it is a clear and forbidden race condition to store new data into the buffer while an asynchronous I/O is active. Exactly this storing of data into the buffer would be done in Example 16.11 if there were an initialization buf=val_init prior to the call to MPI_RECV, or in Example 16.11(new), with the statement buf=val_new. -(End of rationale.)
\paragraph{Fortran TARGET attribute.} Declaring a buffer with the Fortran TARGET attribute in a scoping unit (or BLOCK) tells the compiler that any statement of the scoping unit may be executed while some pointer to the buffer exists. Calling a library routine (e.g., an MPI routine) may imply that such a pointer is used to modify the buffer. Declaring REAL, TARGET :: buf solves the register optimization problem in Examples 16.12, 16.12(new), 16.11, and 16.11(new).
-MPI-2.2, Section 16.2.2, Subsection "A Problem with Register Optimization", page 486, line 43 - page 487, lines 25 read*
To prevent instruction reordering or the allocation of a buffer in a register there are two possibilities in portable Fortran code:
The compiler may be prevented from moving a reference to a buffer across a call to an MPI subroutine by surrounding the call by calls to an external subroutine with the buffer as an actual argument. Note that if the intent is declared in the external subroutine, it must be OUT or INOUT. The subroutine itself may have an empty body, but the compiler does not know this and has to assume that the buffer may be altered. For example, the above call of MPI_RECV might be replaced by
with the separately compiled
(assuming that buf has type INTEGER). The compiler may be similarly prevented from moving a reference to a variable across a call to an MPI subroutine.
In the case of a nonblocking call, as in the above call of MPI_WAIT, no reference to the buffer is permitted until it has been verified that the transfer has been completed. Therefore, in this case, the extra call ahead of the MPI call is not necessary, i.e., the call of MPI_WAIT in the example might be replaced by
-but should read*
\paragraph{Calling MPI_F_SYNC_REG.} ~~To prevent instruction reordering or the allocation of a buffer in a register there are two possibilities in portable Fortran code:~~ [[BR]] The compiler may be prevented from moving a reference to a buffer across a call to an MPI subroutine by surrounding the call by calls to an external subroutine with the buffer as an actual argument. The MPI library provides MPI_F_SYNC_REG for this purpose, see Section 16.2.5(new) on page 489.
(for better readability of this ticket, the following new text is not underlined although it should)
In Examples 16.12 and 16.12(new), it is sufficient to call MPI_F_SYNC_REG(buf) once directly after MPI_WAIT. The call MPI_F_SYNC_REG(buf) prevents moving the last line before the MPI_WAIT call. Further calls to MPI_F_SYNC_REG(buf) are not needed, because it is still correct if the additional read access copy=buf is moved behind MPI_WAIT and before buf=val_overwrite.
For Examples 16.11 and 16.11(new), two calls to MPI_F_SYNC_REG(buf) are needed, one directly before MPI_RECV/MPI_SEND, and one directly after this communication operation. The first call to MPI_F_SYNC_REG(buf) is needed to finish all load and store references to buf prior to MPI_RECV/SEND, and the second call is needed to assure that subsequent accesses to buf are not moved before MPI_RECV/SEND.
In the one-sided communication example in Section 11.7.3, bbbb must be protected similarly to Example 16.12, i.e., a call to MPI_F_SYNC_REG(bbbb) is needed after the second MPI_WIN_FENCE to guarantee that further accesses to bbbb are not moved ahead of the call to MPI_WIN_FENCE. In Process 2, both calls to MPI_WIN_FENCE together act as a communication call with MPI_BOTTOM as the buffer, i.e., before the first fence and after the second fence, a call to MPI_F_SYNC_REG(buff) is needed to guarantee that accesses to buff are not moved after or ahead of the calls to MPI_WIN_FENCE. Using MPI_GET instead of MPI_PUT, the same calls to MPI_F_SYNC_REG are necessary.
The temporary memory modification problem, i.e., Example 16.xx, cannot be solved with this method.
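As an illustration of the register-flush calls described above (routine and variable names are only examples, not proposed standard text), the MPI_WAIT pattern could be protected as follows:
{{{
! Sketch of MPI_F_SYNC_REG usage after MPI_Wait (illustrative only).
SUBROUTINE sync_reg_usage(source, tag)
  USE mpi_f08
  INTEGER, INTENT(IN) :: source, tag
  TYPE(MPI_Request) :: request
  REAL :: buf(100), copy(100)
  CALL MPI_Irecv(buf, 100, MPI_REAL, source, tag, MPI_COMM_WORLD, request)
  ! ... computation that does not reference buf ...
  CALL MPI_Wait(request)
  CALL MPI_F_SYNC_REG(buf)   ! tells the compiler that buf may have changed;
                             ! register copies of buf must be invalidated
  copy = buf                 ! now guaranteed to read the received data
END SUBROUTINE sync_reg_usage
}}}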
\paragraph{A user-defined routine DD instead of MPI_F_SYNC_REG.} Instead of MPI_F_SYNC_REG, one can also use a user-defined external subroutine, which is separately compiled:
Note that if the intent is declared in the external subroutine, it must be OUT or INOUT. The subroutine itself may have an empty body, but the compiler does not know this and has to assume that the buffer may be altered. For example, the above call of MPI_RECV might be replaced by
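The replacement code is not reproduced above; following the MPI-2.2 text that this paragraph refers to (which assumes that buf has type INTEGER), it is of the form:
{{{
! Surround the communication call with calls to the separately compiled
! external subroutine DD, passing the buffer as actual argument.
call DD(buf)
call MPI_RECV(buf, count, datatype, source, tag, comm, status, ierror)
call DD(buf)

! Separately compiled; the calling compiler cannot see the empty body and
! must assume that buf may be altered.
subroutine DD(buf)
    integer buf
end subroutine DD
}}}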
-Section 16.2.2, Subsection "A Problem with Register Optimization" MPI-2.2, page 487, lines 26-31 read*
-but should read*
\paragraph{Module data and COMMON blocks.} An alternative is to put the buffer or variable into a module or a common block and access it through a USE or COMMON statement in each scope where it is referenced, defined or appears as an actual argument in a call to an MPI routine. The compiler will then have to assume that the MPI procedure (MPI_RECV in the above example) may alter the buffer or variable, provided that the compiler cannot analyze that the MPI procedure does not reference the module or common block.
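A minimal sketch of this alternative (module and routine names are illustrative; the call form follows the mpi_f08 proposals of Tickets #239-K and #244-P) could be:
{{{
! Sketch: place the buffer in a module so the compiler must assume that a
! called procedure such as MPI_Recv may access it (illustrative only).
MODULE comm_buffers
  REAL :: buf(100)
END MODULE comm_buffers

SUBROUTINE receive_into_module_buffer(source, tag)
  USE mpi_f08
  USE comm_buffers                     ! buf is module data, not a local
  INTEGER, INTENT(IN) :: source, tag
  CALL MPI_Recv(buf, 100, MPI_REAL, source, tag, MPI_COMM_WORLD)
  ! As long as the compiler cannot prove that MPI_Recv does not use the
  ! module, it may not keep buf in a register across the call.
END SUBROUTINE receive_into_module_buffer
}}}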
-Section 16.2.2, Subsection "A Problem with Register Optimization" MPI-2.2, page 487, lines 33-35 read*
The VOLATILE attribute, available in later versions of Fortran, gives the buffer or variable the properties needed, but it may inhibit optimization of any code containing the buffer or variable.
-but should read*
\paragraph{Fortran VOLATILE attribute.} The VOLATILE attribute, ~~available in later versions of Fortran,~~ gives the buffer or variable the properties needed, but it may inhibit optimization of any code containing the buffer or variable.
'''MPI-2.2, before Section 16.2.5 "Additional Support for Fortran Numeric Intrinsic Types", on page 489, line 31 the following new section is added,''' i.e., "16.2.5(new)" means "16.2.5" and all subsequent existing sections are renumbered (for better readability of this ticket, the following new text is not underlined although it should):
16.2.5(new) Additional Support for Fortran Register-Memory-Synchronization
As described in Section "A Problem with Register Optimization" on page 485, a dummy call is needed to tell the compiler that registers are to be flushed for a given buffer. It is a generic Fortran routine and has a Fortran binding only.
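The binding of the new routine is not reproduced in this ticket text; following the conventions proposed in Tickets #234-F and #239-K, it could look like the following sketch:
{{{
! Sketch of a possible Fortran binding; the routine has no ierror
! argument (see the rationale below).
SUBROUTINE MPI_F_SYNC_REG(buf)
    TYPE(*), DIMENSION(..) :: buf   ! choice buffer, as proposed in Ticket #234-F
END SUBROUTINE
}}}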
This routine has no associated operation. It must be compiled into the MPI library in such a way that a Fortran compiler cannot detect in the module that the routine has an empty body. It is used only to tell the compiler that a cached register value of a variable or buffer should be flushed, i.e., stored back to memory (when necessary) or invalidated.
-Rationale.* This function is not available in other languages because it would not be useful. This routine does not have an ierror return argument because there is no operation that could detect an error. -(End of rationale.)*
-Advice to implementors. It is recommended to bind this routine to a C routine to minimize the risk that the Fortran compiler can detect that the routine is empty, i.e., that a call to this routine could be removed as part of automatic optimization. -(End of advice to implementors.)
-Page 499, Example 16.13 and all following examples are renumbered to 16.14 ...*
Impact on Implementations
Impact on Applications / Users
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
239-K: IERROR optional
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: 14 yes, 0 no, 1 abstain.
Description
-Major decisions in this ticket:*
-Details:*
For user-defined callback functions (e.g., comm_copy_attr_fn) and their predefined callbacks (e.g., MPI_NULL_COPY_FN), ierror should not be optional, i.e., these user-defined functions should not need to check whether the MPI library calls these routines with or without an actual ierror output argument.
Extended Scope
None.
History
Since Fortran 90/95, the OPTIONAL attribute can be specified for dummy arguments. If only the last argument is optional, then the routine can be called with and without this last argument, i.e., using only positional arguments and without the need of using a keyword argument (as in a3=33 in the second call).
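The two example calls referred to above are not reproduced in this ticket text. A minimal illustration of the Fortran OPTIONAL mechanism (the names sub1, a1, a2, a3 are only illustrative) could be:
{{{
! Illustrative only: a routine whose last dummy argument is OPTIONAL can be
! called with or without that argument.
MODULE demo_optional
CONTAINS
  SUBROUTINE sub1(a1, a2, a3)
    INTEGER, INTENT(IN)           :: a1, a2
    INTEGER, INTENT(IN), OPTIONAL :: a3
    IF (PRESENT(a3)) THEN
      PRINT *, a1, a2, a3
    ELSE
      PRINT *, a1, a2
    END IF
  END SUBROUTINE sub1
END MODULE demo_optional

PROGRAM demo
  USE demo_optional
  CALL sub1(11, 22)          ! last argument omitted, positional arguments only
  CALL sub1(11, 22, a3=33)   ! second call, here written with a keyword argument
END PROGRAM demo
}}}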
Proposed Solution
-MPI-2.2, Section 2.6.2 Fortran Binding Issues, page 17, line 17 reads*
All MPI Fortran subroutines have a return code in the last argument.
-but should read*
All MPI Fortran subroutines have a return code in the last argument. With USE mpi_f08, this last argument is declared as OPTIONAL, except for user-defined callback functions (e.g., comm_copy_attr_fn) and their predefined callbacks (e.g., MPI_NULL_COPY_FN).
Text related to this ticket but shown in Ticket #230-B:
-_In new Section 16.2.5 Fortran Support through Module mpi_f08:*
All ierror output arguments are declared as optional, except for user-defined callback functions (e.g., comm_copy_attr_fn) and their predefined callbacks (e.g., MPI_NULL_COPY_FN). (#239-K)
-Rationale. For user-defined callback functions (e.g., comm_copy_attr_fn) and their predefined callbacks (e.g., MPI_NULL_COPY_FN), the ierror argument is not optional, i.e., these user-defined functions should not need to check whether the MPI library calls these routines with or without an actual ierror output argument. -(End of rationale.) (#239-K)
Impact on Implementations
The wrapper from Fortran to C must check whether an actual IERROR argument is provided by the calling Fortran application, and only in this case, the ierror output from the C MPI routine can and must be returned into the actual IERROR argument.
Impact on Applications / Users
For existing applications, there is no impact. In modified or newly written applications, the actual IERROR argument can be omitted.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
240-L: New syntax used in all three (mpif.h, mpi, mpi_f08)
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: 10 yes, 0 no, 0 abstain.
Description
-Major decisions in this ticket:*
INTEGER :: MPI_VERSION
-Details:*
This is only a modernization, without any compatibility issues.
Extended Scope
None.
History
Proposed Solution
-MPI-2.2, Section 8.1.1 Version Inquiries, page 271, lines 33-35 read*
INTEGER MPI_VERSION, MPI_SUBVERSION
[[BR]]PARAMETER (MPI_VERSION = 2)
[[BR]]PARAMETER (MPI_SUBVERSION = 2)
-but should read*
INTEGER :: MPI_VERSION, MPI_SUBVERSION
[[BR]]PARAMETER (MPI_VERSION = 2)
[[BR]]PARAMETER (MPI_SUBVERSION = 2)
Impact on Implementations
None.
Impact on Applications / Users
None.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
241-M: Not including old deprecated routines from MPI-2.0 - MPI-2.2
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: yes by acclamation.
Description
-Major decisions in this ticket:*
-Details:*
With this ticket, the Forum should decide that deprecated routines will not get the new Fortran 2008 bindings.
There are no technical reasons for not providing these routines, because normally there isn't any difference between the C backend of the mpi module and that of mpi_f08.
Extended Scope
None.
History
Proposed Solution
No changes to Section 15.1 "Deprecated since MPI-2.0" and Section 15.2. "Deprecated since MPI-2.2"
Text related to this ticket but shown in Ticket #230-B:
-_In new Section 16.2.5 Fortran Support through Module mpi_f08:*
Impact on Implementations
None.
Impact on Applications / Users
With a switch to module mpi_f08, the deprecated routines must be substituted by non-deprecated routines.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
242-N: Arguments with INTENT=IN, OUT, INOUT
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: 11 yes, 0 no, 3 abstain.
Description
-Major decisions in this ticket:*
-Details:*
Most problems are already described in MPI-2.2, Chapter 2, Terms and Conventions, Section 2.3 Procedure Specification, especially on page 10, line 41 - page 11, line 5:
MPI's use of IN, OUT and INOUT is intended to indicate to the user how an argument is to be used, but does not provide a rigorous classification that can be translated directly into all language bindings (e.g., INTENT in Fortran 90 bindings or const in C bindings). For instance, the "constant" MPI_BOTTOM can usually be passed to OUT buffer arguments. Similarly, MPI_STATUS_IGNORE can be passed as the OUT status argument.
A common occurrence for MPI functions is an argument that is used as IN by some processes and OUT by other processes. Such an argument is, syntactically, an INOUT argument and is marked as such, although, semantically, it is not used in one call both for input and for output on a single process.
Another frequent situation arises when an argument value is needed only by a subset of the processes. When an argument is not significant at a process then an arbitrary value can be passed as an argument.
Tickets #247-S and #248-T therefore show the appropriate decisions for each MPI routine. Tickets #247-S and #248-T must be carefully checked.
Extended Scope
None.
History
Since Fortran 90/95, the attributes INTENT(IN), INTENT(OUT), or INTENT(INOUT) can be specified for dummy arguments.
Proposed Solution
The solution is implemented only in the Fortran routine definitions. Text about this ticket is shown in other tickets, see Tickets #230-B and #249-U.
The Fortran attribute INTENT(IN) is used for all arguments that are IN arguments in the language-independent notation.
For OUT or INOUT arguments in the language-independent notation, the Fortran attributes INTENT(OUT) or INTENT(INOUT) are used, with the following exceptions:
If there exists a constant that can be provided as an actual argument, then an INTENT attribute is not specified. [[BR]] Examples: If Ticket #244-P defines array_of_errcodes in MPI_Comm_spawn(_multiple) as optional (through function overloading), then the array_of_errcodes arguments will have INTENT(OUT).
(The constants MPI_UNWEIGHTED in MPI_Dist_graph_create(_adjacent), MPI_ARGV_NULL, and MPI_ARGVS_NULL do not cause a problem, because they are used in INTENT(IN) arguments.)
INTENT(IN) is specified. [[BR]] Example:
New text:
'''Append a new paragraph in MPI-2.2, Section 16.2 "Fortran Support", Subsection 16.2.2 "Problems with Fortran bindings for MPI", at the end of Subsubsection "Special Constants" on page 484, line 33:'''
With USE mpi_f08, the attributes INTENT(IN), INTENT(OUT), and INTENT(INOUT) are used in the Fortran interface. In most cases, INTENT(IN) is used if the C interface uses call-by-value. For all buffer arguments and for OUT dummy arguments that allow one of these special constants as input, an INTENT(...) is not specified.
Text related to this ticket but shown in Ticket #230-B:
-_In new Section 16.2.5 Fortran Support through Module mpi_f08:*
Impact on Implementations
None.
Impact on Applications / Users
None.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
243-O: Status as MPI_Status Fortran derived type
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: 9 yes, 0 no, 5 abstain.
Description
-Major decisions in this ticket:*
-Details:*
The existing status(MPI_STATUS_SIZE) array already fulfils the new requirements.
But the existing status(MPI_STATUS_SIZE) array programming interface is awkward. Therefore, it is substituted by a TYPE(MPI_Status) derived type.
Extended Scope
None.
History
Since Fortran 90/95, Fortran's derived types are the way to express structures similar to a C struct. The C interface MPI_Status is defined with a C struct.
Proposed Solution
-MPI-2.2, Section 3.2.5 Return Status, page 32, lines 9-13 read*
In Fortran, status is an array of INTEGERs of size MPI_STATUS_SIZE. The constants MPI_SOURCE, MPI_TAG and MPI_ERROR are the indices of the entries that store the source, tag and error fields. Thus, status(MPI_SOURCE), status(MPI_TAG) and status(MPI_ERROR) contain, respectively, the source, tag and error code of the received message.
-but should read*
In Fortran with USE mpi or INCLUDE 'mpif.h', status is an array of INTEGERs of size MPI_STATUS_SIZE. The constants MPI_SOURCE, MPI_TAG and MPI_ERROR are the indices of the entries that store the source, tag and error fields. Thus, status(MPI_SOURCE), status(MPI_TAG) and status(MPI_ERROR) contain, respectively, the source, tag and error code of the received message.
With Fortran USE mpi_f08, status is defined as the Fortran derived type TYPE(MPI_Status), which contains three fields named MPI_SOURCE, MPI_TAG, and MPI_ERROR; the derived type may contain additional fields. Thus, status%MPI_SOURCE, status%MPI_TAG and status%MPI_ERROR contain the source, tag, and error code, respectively, of the received message. Additionally, within the mpi and the mpi_f08 modules, both the constants MPI_STATUS_SIZE, MPI_SOURCE, MPI_TAG, MPI_ERROR and the TYPE(MPI_Status) are defined, so that conversion between both status representations is possible with either module.
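A sketch of the proposed usage (routine and variable names are illustrative; ierror is omitted as allowed by Ticket #239-K) could be:
{{{
! Sketch of accessing a TYPE(MPI_Status) from the proposed mpi_f08 module.
SUBROUTINE recv_and_inspect(source, tag)
  USE mpi_f08
  INTEGER, INTENT(IN) :: source, tag
  TYPE(MPI_Status) :: status
  REAL :: buf(10)
  CALL MPI_Recv(buf, 10, MPI_REAL, source, tag, MPI_COMM_WORLD, status)
  ! The three required fields are accessed as derived-type components:
  PRINT *, 'source =', status%MPI_SOURCE, ' tag =', status%MPI_TAG, &
           ' error =', status%MPI_ERROR
END SUBROUTINE recv_and_inspect
}}}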
-MPI-2.2, Section 16.3.5 Status, page 502, lines 2-3 read*
The following two procedures are provided in C to convert from a Fortran status (which is an array of integers) to a C status (which is a structure), and vice versa.
-but should read*
The following two procedures are provided in C to convert from a Fortran (with the mpi module or mpif.h) status (which is an array of integers) to a C status (which is a structure), and vice versa.
-At the end of MPI-2.2, Section 16.3.5 Status, page 502, lines 2-38,* [[BR]] the following paragraph should be added: [[BR]] (for better readability of this ticket, the following new text is not underlined although it should):
Using the mpi_f08 Fortran module, a status is declared as TYPE(MPI_Status). The C datatype MPI_F_Status can be used to hand over a Fortran TYPE(MPI_Status) argument into a C routine.
int MPI_Status_f082c(MPI_F_Status *f08_status, MPI_Status *c_status)
This C routine converts a Fortran mpi_f08 status into a C status.
int MPI_Status_c2f08(MPI_Status *c_status, MPI_F_Status *f08_status)
This C routine converts a C status into a Fortran mpi_f08 status.
-MPI-2.2, Appendix A.1.2 Types, page 524, after lines 2-44*
The following are defined C type definitions, included in the file mpi.h.
-the following paragraph should be added:*
The following are defined Fortran type definitions, included in the mpi_f08 module.
Impact on Implementations
None.
Impact on Applications / Users
None.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
244-P: MPI_STATUS(ES)_IGNORE and MPI_ERRCODES_IGNORE through function overloading
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: 5 yes, 2 no, 5 abstain. [[BR]]Comment: Not a big win.
Description
-Major decisions in this ticket:*
Substituting MPI_STATUS(ES)_IGNORE within USE mpi_f08 by having the status and array_of_statuses OUT arguments as optional through function overloading.
Substituting MPI_ARGV(S)_NULL and MPI_ERRCODES_IGNORE within USE mpi_f08 by having the argv, array_of_argv IN and array_of_errcodes OUT arguments as optional through function overloading in MPI_COMM_SPAWN and MPI_COMM_SPAWN_MULTIPLE.
Substituting MPI_UNWEIGHTED within USE mpi_f08 by having the sourceweights, destweights, and weights OUT arguments as optional through function overloading in MPI_DIST_GRAPH_CREATE_ADJACENT, MPI_DIST_GRAPH_CREATE, MPI_DIST_GRAPH_NEIGHBORS.
-Details:*
Using function overloading for status and "OPTIONAL" for ierror allows the user to call such routines without using keyword arguments, i.e., all four calls are available.
It is natural to implement optional arguments with the methods available in modern languages instead of using work-arounds that are not part of the language. The existing special address constants MPI_STATUS_IGNORE, MPI_STATUSES_IGNORE, MPI_ARGV_NULL, MPI_ARGVS_NULL, MPI_ERRCODES_IGNORE, and MPI_UNWEIGHTED are not part of the Fortran language. They must be viewed as a work-around outside of the language.
While "OPTIONAL" requires a branch at runtime, with function overloading the branch can be implemented at compile time. On the other hand, function overloading doubles the number of routines. Because ierror is an argument in all but two routines (Wtime and Wtick), whereas status and array_of_statuses show up as OUT arguments in only 33 routines and array_of_errcodes in only two routines, the doubling through overloading is acceptable for the latter while OPTIONAL is used for ierror.
MPI_STATUS_IGNORE, MPI_STATUSES_IGNORE, MPI_ARGV_NULL, MPI_ARGVS_NULL, MPI_ERRCODES_IGNORE, and MPI_UNWEIGHTED are the only six MPI_..._IGNORE special constants. Therefore, it makes sense to implement the function overloading for all three or for none.
Extended Scope
None.
History
All MPI_..._IGNORE special constants were introduced in MPI-2.0, i.e., applications written in pure MPI-1.1 are not affected.
Proposed Solution - Part 1
-MPI-2.2, Section 2.3 Procedure Specification, page 10, line 45 reads*
Similarly, MPI_STATUS_IGNORE can be passed as the OUT status argument.
-but should read*
Similarly, MPI_STATUS_IGNORE can be passed as the OUT status argument (with mpi.h, the mpi module or mpif.h).
-MPI-2.2, Section 2.5.2 Array Arguments, page 14, lines 19-21 read*
The same approach is followed for other array arguments. In some cases NULL handles are considered valid entries. When a NULL argument is desired for an array of statuses, one uses MPI_STATUSES_IGNORE.
-but should read*
The same approach is followed for other array arguments. In some cases NULL handles are considered valid entries. When a NULL argument is desired for an array of statuses, one uses MPI_STATUSES_IGNORE. With the mpi_f08 module, optional arguments through function overloading are used instead of [[BR]] MPI_STATUS_IGNORE, MPI_STATUSES_IGNORE, (if #244-P Part 1 is accepted) [[BR]] MPI_ARGV_NULL, MPI_ARGVS_NULL, MPI_ERRCODES_IGNORE, (#244-P Part 2) [[BR]] and MPI_UNWEIGHTED. (#244-P Part 3) [[BR]] -(Without #244-P Part 2 and/or Part 3:)* [[BR]] The constants MPI_ARGV_NULL, MPI_ARGVS_NULL, MPI_ERRCODES_IGNORE, (without Part 2) [[BR]] and MPI_UNWEIGHTED (without Part 3) [[BR]] are not substituted by function overloading.
-_MPI-2.2, Section 3.2.6 Passing MPI_STATUS_IGNORE for Status, page 34, lines 3-5 read*
To cope with this problem, there are two predefined constants, MPI_STATUS_IGNORE and MPI_STATUSES_IGNORE, which when passed to a receive, wait, or test function, inform the implementation that the status fields are not to be filled in. Note that
-but should read*
To cope with this problem, there are two predefined constants, MPI_STATUS_IGNORE and MPI_STATUSES_IGNORE with the C language bindings and the Fortran bindings through the mpi module and the mpif.h include file, which when passed to a receive, wait, or test function, inform the implementation that the status fields are not to be filled in. Note that
-_MPI-2.2, Section 3.2.6 Passing MPI_STATUS_IGNORE for Status, page 34, lines 28-35 read*
There are no C++ bindings for MPI_STATUS_IGNORE or MPI_STATUSES_IGNORE. To allow an OUT or INOUT MPI::Status argument to be ignored, all MPI C++ bindings that have OUT or INOUT MPI::Status parameters are overloaded with a second version that omits the OUT or INOUT MPI::Status parameter.
Example 3.1 The C++ bindings for MPI_PROBE are:
void MPI::Comm::Probe(int source, int tag, MPI::Status& status) const
[[BR]]void MPI::Comm::Probe(int source, int tag) const
-but should read*
~~There are no C++ bindings for MPI_STATUS_IGNORE or MPI_STATUSES_IGNORE.~~ With the Fortran bindings through the mpi_f08 module and the C++ bindings, MPI_STATUS_IGNORE or MPI_STATUSES_IGNORE does not exist. To allow an OUT or INOUT TYPE(MPI_Status) or MPI::Status argument to be ignored, all mpi_f08 and C++ bindings that have OUT or INOUT TYPE(MPI_Status) or MPI::Status parameters are overloaded with a second version that omits the OUT or INOUT TYPE(MPI_Status) or MPI::Status parameter.
Example 3.1 The mpi_f08 bindings for MPI_PROBE are: [[BR]] SUBROUTINE MPI_Probe(source, tag, comm, status, ierror) [[BR]] INTEGER, INTENT(IN) :: source, tag [[BR]] TYPE(MPI_Comm), INTENT(IN) :: comm [[BR]] TYPE(MPI_Status), INTENT(OUT) :: status [[BR]] INTEGER, OPTIONAL, INTENT(OUT) :: ierror [[BR]] END SUBROUTINE [[BR]] SUBROUTINE MPI_Probe(source, tag, comm, ierror) [[BR]] INTEGER, INTENT(IN) :: source, tag [[BR]] TYPE(MPI_Comm), INTENT(IN) :: comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT) :: ierror [[BR]] END SUBROUTINE [[BR]]
Example 3.2 The C++ bindings for MPI_PROBE are: [[BR]] void MPI::Comm::Probe(int source, int tag, MPI::Status& status) const [[BR]] void MPI::Comm::Probe(int source, int tag) const
-_MPI-2.2, Section 3.2.6 Passing MPI_STATUSIGNORE for Status, page 312, lines 21-28 read*
In C or Fortran, an application may pass MPI_ERRCODES_IGNORE if it is not interested in the error codes. In C++ this constant does not exist, and the array_of_errcodes argument may be omitted from the argument list.
-but should read*
In C or in the Fortran mpi module or mpif.h include file, an application may pass MPI_ERRCODES_IGNORE if it is not interested in the error codes. In the Fortran mpi_f08 module or in C++ this constant does not exist, and the array_of_errcodes argument may be omitted from the argument list.
-MPI-2.2, Section 12.2 Generalized Requests, page 375, lines 16-21 read*
In both cases, the callback is passed a reference to the corresponding status variable passed by the user to the MPI call; the status set by the callback function is returned by the MPI call. If the user provided MPI_STATUS_IGNORE or MPI_STATUSES_IGNORE to the MPI function that causes query_fn to be called, then MPI will pass a valid status object to query_fn, and this status will be ignored upon return of the callback function.
-but should read*
In both cases, the callback is passed a reference to the corresponding status variable passed by the user to the MPI call; the status set by the callback function is returned by the MPI call. If the user provided MPI_STATUS_IGNORE or MPI_STATUSES_IGNORE to the MPI function that causes query_fn to be called, or has omitted the status argument (with the mpi_f08 Fortran module or C++), then MPI will pass a valid status object to query_fn, and this status will be ignored upon return of the callback function.
-MPI-2.2, Section 12.2 Generalized Requests, page 376, lines 44-45 read*
However, if the MPI function was passed MPI_STATUSES_IGNORE, then the individual error codes returned by each callback functions will be lost.
-but should read*
However, if the MPI function was passed MPI_STATUSES_IGNORE or the status argument was omitted, then the individual error codes returned by each callback functions will be lost.
'''MPI-2.2, Section 13.4.1 Data Access Routines, Subsection Data Access Conventions, page 406, lines 44-46 read'''
The user can pass (in C and Fortran) MPI_STATUS_IGNORE in the status argument if the return value of this argument is not needed. In C++, the status argument is optional.
-but should read*
The user can pass (in C and with the Fortran mpi module or mpif.h include file) MPI_STATUS_IGNORE in the status argument if the return value of this argument is not needed. With the Fortran mpi_f08 module or in C++, the status argument is optional.
-MPI-2.2, Section 16.2.2 Problems With Fortran Bindings for MPI, page 481, lines 26-30 read*
-and need no modifications within this ticket.*
-MPI-2.2, Section 16.2.4 Extended Fortran Support, page 489, lines 1-3 read*
Moreover, “constants” such as MPI_BOTTOM and MPI_STATUS_IGNORE are not constants as defined by Fortran, but “special addresses” used in a nonstandard way.
-and need no modifications within this ticket.*
-MPI-2.2, Section 16.3.5 Status, page 502, needs no modifications within this ticket.*
-MPI-2.2, Section 16.3.9 Constants, page 510, lines 7-10 read*
Also constant "addresses," i.e., special values for reference arguments that are not handles, such as MPI_BOTTOM or MPI_STATUS_IGNORE may have different values in different languages.
-and need no modifications within this ticket.*
'''MPI-2.2, Appendix A.1.1 Defined Constants, Table "Constants Specifying Empty or Ignored Input", page 523, lines 22-36, left column reads'''
-but should read*
MPI-2.2, Section A.4.18 Inter-language Operability, page 591, line 43 - page 592, line 6 reads
Since there are no C++ MPI::STATUS_IGNORE and MPI::STATUSES_IGNORE objects, the result of promoting the C or Fortran handles (MPI_STATUS_IGNORE and MPI_STATUSES_IGNORE) to C++ is undefined.
and need no modifications within this ticket.
Proposed Solution - Part 2
-To be done.*
Proposed Solution - Part 3
-To be done.*
Impact on Implementations
None.
Impact on Applications / Users
None.
(With the Alternative Solution: All existing usage of MPI_STATUS_IGNORE must be substituted by using the optional call syntax.)
Alternative Solutions
-Major decision in this alternative solution:*
MPI_STATUS(ES)_IGNORE through function overloading in the new Fortran 2008 binding, MPI_STATUS(ES)_IGNORE in mpi_f08.
-Details:*
In new Section 16.2.5 "Fortran Support through Module mpi_f08" added by Ticket #230-B, one must add before the list item about IERROR:
status and array_of_statuses output arguments are declared as optional (only with #244-P Alternative Solution).
Same for Ticket #247-S.
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
245-Q: MPI_ALLOC_MEM and Fortran
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
None.
Description
-Major decisions in this ticket:*
-Details:*
-To be done.*
Extended Scope
None.
History
Proposed Solution
MPI-2.2, Section 8.2 Memory Allocation
-TODO*: Declaration of MPI_ALLOC_MEM, MPI_FREE_MEM
-TODO*: MPI-2.2, Section 8.2 Memory Allocation, Example 8.1 on page 275, lines 42 - 2 on next page:
and text MPI-2.2, Section 8.2 Memory Allocation, page 276, lines 4-6,
and MPI-2.2, Section 11.4.3 Lock, page 358, lines 23-28:
A new version with a Fortran C-binding pointer must be added. As far as I know, this does not change the interface, i.e., the new example should be valid for all three: the include file mpif.h and the modules mpi and mpi_f08.
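The new example is still marked TODO above; a sketch of what it might look like, assuming an MPI_ALLOC_MEM binding whose baseptr argument is exposed as TYPE(C_PTR), is:
{{{
! Sketch only: allocate MPI memory and associate it with a Fortran pointer
! through the ISO_C_BINDING facilities (interface assumed, see above).
SUBROUTINE alloc_mem_sketch()
  USE mpi_f08
  USE, INTRINSIC :: ISO_C_BINDING
  TYPE(C_PTR) :: p
  REAL, POINTER :: a(:)
  INTEGER(KIND=MPI_ADDRESS_KIND) :: nbytes
  nbytes = 100 * 4                      ! bytes for 100 default REALs (illustrative)
  CALL MPI_Alloc_mem(nbytes, MPI_INFO_NULL, p)
  CALL C_F_POINTER(p, a, (/100/))       ! now a(1:100) refers to the allocated memory
  a(1) = 3.0
  CALL MPI_Free_mem(a)                  ! release the memory again
END SUBROUTINE alloc_mem_sketch
}}}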
-To be done.*
Impact on Implementations
None.
Impact on Applications / Users
Users of MPI_ALLOC_MEM may (but need not) switch from Cray-pointers to C-Pointers.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
246-R: Upper and lower case letters in new Fortran bindings
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Description
-Major decisions in this ticket:*
The mpi_f08 interface description uses MPI_Xxxxx for MPI routines and MPI Fortran (derived) types, e.g., MPI_Comm.
-Example:*
Extended Scope
None.
History
MPI-2.2 does not explain the usage of lower- and uppercase names, neither for C nor for Fortran. Therefore, the wording need not be changed.
Proposed Solution
Use the rules in the description for the language bindings shown in Ticket #247-S.
Impact on Implementations
None, because Fortran is case insensitive.
Impact on Applications / Users
None, because Fortran is case insensitive. Additionally, all constant handles, including the C datatype handles used for Fortran types, are in upper case; therefore no changes are needed.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
247-S: All new Fortran 2008 bindings - Part 1
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Description
-Major decisions in this ticket:*
mpi_f08.
-Details:*
This ticket provides the rule for converting existing Fortran interfaces into new Fortran 2008 interfaces.
Extended Scope
None.
History
Proposed Solution
-MPI-2.2, Section 5.9.5 User-Defined Reduction Operations, page 172, lines 9-12 read*
The Fortran declaration of the user-defined function appears below.
-but should read*
When using mpif.h or the mpi module, the Fortran declaration of the user-defined function is:
When using the mpi_f08 module, the declaration is:
-CAUTION:* If Ticket #234-F does not pass, then the new TYPE(*) line above must be substituted by [[BR]] <type> invec(len), inoutvec(len)
-CAUTION:* If Ticket #231-C does not pass, then the new INTEGER and the new TYPE(MPI_Datatype) lines above must be substituted by [[BR]] INTEGER :: len, type
-MPI-2.2, Section 5.9.5 User-Defined Reduction Operations, page 173, lines 8-13 read*
The Fortran version of MPI_REDUCE will invoke a user-defined reduce function using the Fortran calling conventions and will pass a Fortran-type datatype argument; the C version will use C calling convention and the C representation of a datatype handle. Users who plan to mix languages should define their reduction functions accordingly. [[BR]](End of advice to users.)
-but should read*
The Fortran version of MPI_REDUCE will invoke a user-defined reduce function using the Fortran calling conventions and will pass a Fortran-type datatype argument; the C version will use C calling convention and the C representation of a datatype handle. If a Fortran user-defined reduce function is used, then the calling sequence further depends on whether MPI_OP_CREATE was invoked via the mpif.h or USE mpi interface, or the USE mpi_f08 interface. Users who plan to mix languages should define their reduction functions accordingly. [[BR]](End of advice to users.)
-MPI-2.2, Section 6.7 Caching*
-TODO*: Declaration of MPI_COMM_CREATE_KEYVAL, COMM_COPY_ATTR_FN, COMM_DELETE_ATTR_FN, MPI_WIN_CREATE_KEYVAL, WIN_COPY_ATTR_FN, WIN_DELETE_ATTR_FN, MPI_TYPE_CREATE_KEYVAL, TYPE_COPY_ATTR_FN, TYPE_DELETE_ATTR_FN
-MPI-2.2, Section 8.3 Error Handling*
-TODO*: Declaration of MPI_XXX_CREATE_ERRHANDLER with XXX = COMM, WIN, or FILE, and XXX_ERRHANDLER_FUNCTION
-MPI-2.2, Section 12.2 Generalized Requests*
-TODO*: Declaration of MPI_GREQUEST_START, GREQUEST_QUERY_FUNCTION, GREQUEST_FREE_FUNCTION, GREQUEST_CANCEL_FUNCTION,
-MPI-2.2, Appendix A.1.3 Prototype definitions, page 525-528*
-TODO*: Must be done also by hand
-MPI-2.2, Appendix A.3, Fortran Bindings, page 547, line 1 reads*
A.3 Fortran Bindings
-but should read*
A.3 Fortran Bindings with mpif.h or the mpi module
-After MPI-2.2, Appendix A.3, Fortran Bindings, i.e., after page 570, add new Section*
A.4 Fortran 2008 Bindings with the mpi_f08 module
-It contains the same as MPI-2.2, A.3, but*
<type> xxx(*), yyy(*) --> TYPE(*), DIMENSION(..) :: xxx, yyy
<type> xxx(*) --> TYPE(*), DIMENSION(..) :: xxx
<type> X --> TYPE(*) :: x
INTEGER FILE --> TYPE(MPI_File) :: file
INTEGER FH --> TYPE(MPI_File) :: fh
INTEGER DATATYPE --> TYPE(MPI_Datatype) :: datatype
INTEGER SENDTYPE --> TYPE(MPI_Datatype) :: sendtype
INTEGER RECVTYPE --> TYPE(MPI_Datatype) :: recvtype
INTEGER SENDTYPES(*) --> TYPE(MPI_Datatype) :: sendtypes(*)
INTEGER RECVTYPES(*) --> TYPE(MPI_Datatype) :: recvtypes(*)
INTEGER OLDTYPE --> TYPE(MPI_Datatype) :: oldtype
INTEGER ARRAY_OF_TYPES(*) --> TYPE(MPI_Datatype) :: array_of_types(*)
INTEGER TYPE --> TYPE(MPI_Datatype) :: type
INTEGER ARRAY_OF_DATATYPES(*) --> TYPE(MPI_Datatype) :: array_of_datatypes(*)
INTEGER NEWTYPE --> TYPE(MPI_Datatype) :: newtype
INTEGER ORIGIN_DATATYPE --> TYPE(MPI_Datatype) :: origin_datatype
INTEGER TARGET_DATATYPE --> TYPE(MPI_Datatype) :: target_datatype
INTEGER ETYPE --> TYPE(MPI_Datatype) :: etype
INTEGER FILETYPE --> TYPE(MPI_Datatype) :: filetype
INTEGER OP --> TYPE(MPI_Op) :: op
INTEGER OLDCOMM --> TYPE(MPI_Comm) :: oldcomm
INTEGER INFO --> TYPE(MPI_Info) :: info
INTEGER NEWINFO --> TYPE(MPI_Info) :: newinfo
INTEGER INFO_USED --> TYPE(MPI_Info) :: info_used
INTEGER ARRAY_OF_INFO(*) --> TYPE(MPI_Info) :: array_of_info(*)
INTEGER COMM --> TYPE(MPI_Comm) :: comm
INTEGER COMM1 --> TYPE(MPI_Comm) :: comm1
INTEGER COMM2 --> TYPE(MPI_Comm) :: comm2
INTEGER LOCAL_COMM --> TYPE(MPI_Comm) :: local_comm
INTEGER PEER_COMM --> TYPE(MPI_Comm) :: peer_comm
INTEGER NEWINTERCOMM --> TYPE(MPI_Comm) :: newintercomm
INTEGER INTERCOMM --> TYPE(MPI_Comm) :: intercomm
INTEGER INTRACOMM --> TYPE(MPI_Comm) :: intracomm
INTEGER COMM_OLD --> TYPE(MPI_Comm) :: comm_old
INTEGER COMM_CART --> TYPE(MPI_Comm) :: comm_cart
INTEGER COMM_DIST_GRAPH --> TYPE(MPI_Comm) :: comm_dist_graph
INTEGER COMM_GRAPH --> TYPE(MPI_Comm) :: comm_graph
INTEGER PARENT --> TYPE(MPI_Comm) :: parent
INTEGER GROUP --> TYPE(MPI_Group) :: group
INTEGER GROUP1 --> TYPE(MPI_Group) :: group1
INTEGER GROUP2 --> TYPE(MPI_Group) :: group2
INTEGER NEWGROUP --> TYPE(MPI_Group) :: newgroup
INTEGER NEWCOMM --> TYPE(MPI_Comm) :: newcomm
INTEGER REQUEST --> TYPE(MPI_Request) :: request
INTEGER ARRAY_OF_REQUESTS(*) --> TYPE(MPI_Request) :: array_of_requests(*)
INTEGER WIN --> TYPE(MPI_Win) :: win
INTEGER ERRHANDLER --> TYPE(MPI_Errhandler) :: errhandler
The total list of all new Fortran bindings is shown in Ticket #248-T
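As an illustration of applying these substitution rules, the MPI-2.2 Fortran binding of MPI_SEND would be converted as follows (the result matches the binding listed in Ticket #248-T):
{{{
! Old style (mpif.h / mpi module):
!   MPI_SEND(BUF, COUNT, DATATYPE, DEST, TAG, COMM, IERROR)
!   <type> BUF(*)
!   INTEGER COUNT, DEST, TAG, DATATYPE, COMM, IERROR
!
! Converted mpi_f08 style, following the mapping above and Tickets
! #231-C, #234-F, #239-K, and #242-N:
SUBROUTINE MPI_Send(buf, count, datatype, dest, tag, comm, ierror)
    TYPE(*), DIMENSION(..)         :: buf
    INTEGER, INTENT(IN)            :: count, dest, tag
    TYPE(MPI_Datatype), INTENT(IN) :: datatype
    TYPE(MPI_Comm), INTENT(IN)     :: comm
    INTEGER, OPTIONAL, INTENT(OUT) :: ierror
END SUBROUTINE
}}}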
Impact on Implementations
Impact on Applications / Users
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
248-T: All new Fortran 2008 bindings - Part 2
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Description
-Major decisions in this ticket:*
Extended Scope
None.
History
Proposed Solution
A.4 Fortran 2008 Bindings with module mpi_f08
A.4.1 Point-to-Point Communication Fortran Bindings
-_SUBROUTINE MPI_Bsend(buf, count, datatype, dest, tag, comm, ierror)* [[BR]] TYPE(*), DIMENSION(..)
::
buf [[BR]] INTEGER, INTENT(IN)::
count, dest, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Bsend_init(buf, count, datatype, dest, tag, comm, request, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, dest, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Buffer_attach(buffer, size, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
buffer [[BR]] INTEGER, INTENT(IN)::
size [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Buffer_detach(buffer_addr, size, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
buffer_addr [[BR]] INTEGER, INTENT(OUT)::
size [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Cancel(request, ierror)* [[BR]] TYPE(MPI_Request), INTENT(IN)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Get_count(status, datatype, count, ierror) [[BR]] TYPE(MPI_Status), INTENT(IN)::
status [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, INTENT(OUT)::
count [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Ibsend(buf, count, datatype, dest, tag, comm, request, ierror) [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, dest, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Iprobe(source, tag, comm, flag, status, ierror)* [[BR]] INTEGER, INTENT(IN)::
source, tag [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Irecv(buf, count, datatype, source, tag, comm, request, ierror) [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, source, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Irsend(buf, count, datatype, dest, tag, comm, request, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, dest, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Isend(buf, count, datatype, dest, tag, comm, request, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, dest, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Issend(buf, count, datatype, dest, tag, comm, request, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, dest, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Probe(source, tag, comm, status, ierror)* [[BR]] INTEGER, INTENT(IN)::
source, tag [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Recv(buf, count, datatype, source, tag, comm, status, ierror) [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, source, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Recv_init(buf, count, datatype, source, tag, comm, request, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, source, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Request_free(request, ierror)* [[BR]] TYPE(MPI_Request), INTENT(INOUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Request_get_status( request, flag, status, ierror) [[BR]] TYPE(MPI_Request), INTENT(IN)::
request [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Rsend(buf, count, datatype, dest, tag, comm, ierror) [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, dest, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Rsend_init(buf, count, datatype, dest, tag, comm, request, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, dest, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Send(buf, count, datatype, dest, tag, comm, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, dest, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Sendrecv(sendbuf, sendcount, sendtype, dest, sendtag, recvbuf, recvcount, recvtype, source, recvtag, comm, status, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
sendcount, dest, sendtag, recvcount, source, recvtag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
sendtype, recvtype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Sendrecv_replace(buf, count, datatype, dest, sendtag, source, recvtag, comm, status, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, dest, sendtag, source, recvtag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Send_init(buf, count, datatype, dest, tag, comm, request, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, dest, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Ssend(buf, count, datatype, dest, tag, comm, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, dest, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Ssend_init(buf, count, datatype, dest, tag, comm, request, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count, dest, tag [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Start(request, ierror)* [[BR]] TYPE(MPI_Request), INTENT(INOUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Startall(count, array_of_requests, ierror) [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Request), INTENT(INOUT)::
array_ofrequests() [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Test(request, flag, status, ierror)* [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] TYPE(MPI_Request), INTENT(INOUT)::
request [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Testall(count, array_of_requests, flag, array_of_statuses, ierror) [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Request), INTENT(INOUT)::
array_ofrequests() [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] TYPE(MPI_Status), INTENT(OUT)::
array_ofstatuses() ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Testany(count, array_of_requests, index, flag, status, ierror) [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Request), INTENT(INOUT)::
array_ofrequests() [[BR]] INTEGER, INTENT(OUT)::
index [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Testsome(incount, array_of_requests, outcount, array_of_indices, array_of_statuses, ierror)* [[BR]] INTEGER, INTENT(IN)::
incount [[BR]] TYPE(MPI_Request), INTENT(INOUT)::
array_ofrequests() [[BR]] INTEGER, INTENT(OUT)::
outcount, array_ofindices() [[BR]] TYPE(MPI_Status), INTENT(OUT)::
array_ofstatuses() ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Test_cancelled(status, flag, ierror)* [[BR]] TYPE(MPI_Status), INTENT(IN)::
status [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Wait(request, status, ierror) [[BR]] TYPE(MPI_Request), INTENT(INOUT)::
request [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Waitall(count, array_of_requests, array_of_statuses, ierror) [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Request), INTENT(INOUT)::
array_of_requests(*) [[BR]] TYPE(MPI_Status), INTENT(OUT)::
array_ofstatuses() ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Waitany(count, array_of_requests, index, status, ierror) [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Request), INTENT(INOUT)::
array_of_requests(*) [[BR]] INTEGER, INTENT(OUT)::
index [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Waitsome(incount, array_of_requests, outcount, array_of_indices, array_of_statuses, ierror)* [[BR]] INTEGER, INTENT(IN)::
incount [[BR]] TYPE(MPI_Request), INTENT(INOUT)::
array_of_requests(*) [[BR]] INTEGER, INTENT(OUT)::
outcount, array_of_indices(*) [[BR]] TYPE(MPI_Status), INTENT(OUT)::
array_of_statuses(*) ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Get_address(location, address, ierror)* [[BR]] TYPE(*), DIMENSION(..)::
location [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
address [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]]A.4.2 Datatypes Fortran Bindings
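Usage illustration (not part of the proposed binding list): a minimal sketch of how the datatype bindings below would be called through the new module, assuming USE mpi_f08. The array a and the message parameters are invented for the example, and ierror is omitted because it is OPTIONAL.
  USE mpi_f08
  TYPE(MPI_Datatype) :: rowtype
  REAL :: a(100,100)
  ! one row of the column-major array a: 100 elements with stride 100
  CALL MPI_Type_vector(100, 1, 100, MPI_REAL, rowtype)
  CALL MPI_Type_commit(rowtype)
  CALL MPI_Send(a(1,1), 1, rowtype, 1, 0, MPI_COMM_WORLD)   ! send row 1 to rank 1, tag 0
  CALL MPI_Type_free(rowtype)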
SUBROUTINE MPI_Get_elements(status, datatype, count, ierror) [[BR]] TYPE(MPI_Status), INTENT(IN)::
status [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, INTENT(OUT)::
count [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Pack(inbuf, incount, datatype, outbuf, outsize, position, comm, ierror) [[BR]] TYPE(*), DIMENSION(..)::
inbuf, outbuf [[BR]] INTEGER, INTENT(IN)::
incount, outsize [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, INTENT(INOUT)::
position [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Pack_external(datarep, inbuf, incount, datatype, outbuf, outsize, position, ierror) [[BR]] CHARACTER(LEN=*), INTENT(IN)::
datarep [[BR]] TYPE(*), DIMENSION(..)::
inbuf, outbuf [[BR]] INTEGER, INTENT(IN)::
incount [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
outsize [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(INOUT)::
position [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Pack_external_size(datarep, incount, datatype, size, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, INTENT(IN)::
incount [[BR]] CHARACTER(LEN=*), INTENT(IN)::
datarep [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
size [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Pack_size(incount, datatype, comm, size, ierror)* [[BR]] INTEGER, INTENT(IN)::
incount [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(OUT)::
size [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_commit(datatype, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(INOUT)::
datatype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_contiguous(count, oldtype, newtype, ierror) [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
oldtype [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
newtype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_create_darray(size, rank, ndims, array_of_gsizes, array_of_distribs, array_of_dargs, array_of_psizes, order, oldtype, newtype, ierror) [[BR]] INTEGER, INTENT(IN)::
size, rank, ndims, array_of_gsizes(*), array_of_distribs(*), array_of_dargs(*), array_of_psizes(*), order [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
oldtype [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
newtype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_create_hindexed(count, array_of_blocklengths, array_of_displacements, oldtype, newtype, ierror) [[BR]] INTEGER, INTENT(IN)::
count, array_of_blocklengths(*) [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
array_of_displacements(*) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
oldtype [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
newtype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_create_hvector(count, blocklength, stride, oldtype, newtype, ierror) [[BR]] INTEGER, INTENT(IN)::
count, blocklength [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
stride [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
oldtype [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
newtype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_create_indexed_block(count, blocklength, array_of_displacements, oldtype, newtype, ierror) [[BR]] INTEGER, INTENT(IN)::
count, blocklength, array_of_displacements(*) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
oldtype [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
newtype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Type_create_resized(oldtype, lb, extent, newtype, ierror)* [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
lb, extent [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
oldtype [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
newtype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_create_struct(count, array_of_blocklengths, array_of_displacements, array_of_types, newtype, ierror) [[BR]] INTEGER, INTENT(IN)::
count, array_of_blocklengths(*) [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
array_of_displacements(*) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
array_of_types(*) [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
newtype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Type_create_subarray(ndims, array_of_sizes, array_of_subsizes, array_of_starts, order, oldtype, newtype, ierror)* [[BR]] INTEGER, INTENT(IN)::
ndims, array_of_sizes(*), array_of_subsizes(*), array_of_starts(*), order [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
oldtype [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
newtype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_dup(oldtype, newtype, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
oldtype [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
newtype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]](This routine specification was changed by Ticket #252-W). [[BR]] [[BR]] SUBROUTINE MPI_Type_free(datatype, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(INOUT)::
datatype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_get_contents(datatype, max_integers, max_addresses, max_datatypes, array_of_integers, array_of_addresses, array_of_datatypes, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, INTENT(IN)::
max_integers, max_addresses, max_datatypes [[BR]] INTEGER, INTENT(OUT)::
array_of_integers(*) [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
array_of_addresses(*) [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
array_of_datatypes(*) [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Type_get_envelope(datatype, num_integers, num_addresses, num_datatypes, combiner, ierror)* [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, INTENT(OUT)::
num_integers, num_addresses, num_datatypes, combiner [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_get_extent(datatype, lb, extent, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
lb, extent [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_get_true_extent(datatype, true_lb, true_extent, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
true_lb, true_extent [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_indexed(count, array_of_blocklengths, array_of_displacements, oldtype, newtype, ierror) [[BR]] INTEGER, INTENT(IN)::
count, array_of_blocklengths(*), array_of_displacements(*) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
oldtype [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
newtype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_size(datatype, size, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, INTENT(OUT)::
size [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_vector(count, blocklength, stride, oldtype, newtype, ierror) [[BR]] INTEGER, INTENT(IN)::
count, blocklength, stride [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
oldtype [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
newtype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Unpack(inbuf, insize, position, outbuf, outcount, datatype, comm, ierror) [[BR]] TYPE(*), DIMENSION(..)::
inbuf, outbuf [[BR]] INTEGER, INTENT(IN)::
insize, outcount [[BR]] INTEGER, INTENT(INOUT)::
position [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Unpack_external(datarep, inbuf, insize, position, outbuf, outcount, datatype, ierror) [[BR]] CHARACTER(LEN=*), INTENT(IN)::
datarep [[BR]] TYPE(*), DIMENSION(..)::
inbuf, outbuf [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
insize [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(INOUT)::
position [[BR]] INTEGER, INTENT(IN)::
outcount [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]]A.4.3 Collective Communication Fortran Bindings
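Usage illustration (not part of the proposed binding list): a sketch of a collective call with the typed handles, assuming USE mpi_f08 and an already initialized MPI. Variable names are invented; ierror is omitted because it is OPTIONAL.
  USE mpi_f08
  INTEGER :: myrank, local_val, global_sum
  CALL MPI_Comm_rank(MPI_COMM_WORLD, myrank)
  local_val = myrank + 1
  ! sum the per-process values; the result is available on all processes
  CALL MPI_Allreduce(local_val, global_sum, 1, MPI_INTEGER, MPI_SUM, MPI_COMM_WORLD)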
SUBROUTINE MPI_Allgather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierror) [[BR]] TYPE(*), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
sendcount, recvcount [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
sendtype [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
recvtype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Allgatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, comm, ierror) [[BR]] TYPE(*), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
sendcount, recvcounts(*), displs(*) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
sendtype [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
recvtype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Allreduce(sendbuf, recvbuf, count, datatype, op, comm, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Op), INTENT(IN)::
op [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Alltoall(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, comm, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
sendcount, recvcount [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
sendtype [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
recvtype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Alltoallv(sendbuf, sendcounts, sdispls, sendtype, recvbuf, recvcounts, rdispls, recvtype, comm, ierror) [[BR]] TYPE(*), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
sendcounts(*), sdispls(*), recvcounts(*), rdispls(*) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
sendtype [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
recvtype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Alltoallw(sendbuf, sendcounts, sdispls, sendtypes, recvbuf, recvcounts, rdispls, recvtypes, comm, ierror) [[BR]] TYPE(*), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
sendcounts(*), sdispls(*), recvcounts(*), rdispls(*) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
sendtypes(*) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
recvtypes(*) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Barrier(comm, ierror)* [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Bcast(buffer, count, datatype, root, comm, ierror) [[BR]] TYPE(_), DIMENSION(..)::
buffer [[BR]] INTEGER, INTENT(IN)::
count, root [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Exscan(sendbuf, recvbuf, count, datatype, op, comm, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Op), INTENT(IN)::
op [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Gather(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
sendcount, recvcount, root [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
sendtype [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
recvtype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Gatherv(sendbuf, sendcount, sendtype, recvbuf, recvcounts, displs, recvtype, root, comm, ierror) [[BR]] TYPE(*), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
sendcount, recvcounts(*), displs(*), root [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
sendtype [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
recvtype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Op_commutative(op, commute, ierror)* [[BR]] TYPE(MPI_Op), INTENT(IN)::
op [[BR]] LOGICAL, INTENT(OUT)::
commute [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Op_create(user_fn, commute, op, ierror) [[BR]] EXTERNAL::
user_fn [[BR]] LOGICAL, INTENT(IN)::
commute [[BR]] TYPE(MPI_Op), INTENT(OUT)::
op [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]](This routine specification was changed by Ticket #252-W). [[BR]] [[BR]] SUBROUTINE MPI_Op_free(op, ierror) [[BR]] TYPE(MPI_Op), INTENT(INOUT)::
op [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Reduce(sendbuf, recvbuf, count, datatype, op, root, comm, ierror) [[BR]] TYPE(_), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
count, root [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Op), INTENT(IN)::
op [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Reduce_local(inbuf, inoutbuf, count, datatype, op, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
inbuf, inoutbuf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Op), INTENT(IN)::
op [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Reduce_scatter(sendbuf, recvbuf, recvcounts, datatype, op, comm, ierror) [[BR]] TYPE(*), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
recvcounts(*) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Op), INTENT(IN)::
op [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Reduce_scatter_block(sendbuf, recvbuf, recvcount, datatype, op, comm, ierror) [[BR]] TYPE(_), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
recvcount [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Op), INTENT(IN)::
op [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Scan(sendbuf, recvbuf, count, datatype, op, comm, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Op), INTENT(IN)::
op [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Scatter(sendbuf, sendcount, sendtype, recvbuf, recvcount, recvtype, root, comm, ierror)* [[BR]] TYPE(_), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
sendcount, recvcount, root [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
sendtype [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
recvtype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Scatterv(sendbuf, sendcounts, displs, sendtype, recvbuf, recvcount, recvtype, root, comm, ierror) [[BR]] TYPE(*), DIMENSION(..)::
sendbuf, recvbuf [[BR]] INTEGER, INTENT(IN)::
sendcounts(*), displs(*), recvcount, root [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
sendtype [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
recvtype [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]]A.4.4 Groups, Contexts, Communicators, and Caching Fortran Bindings
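Usage illustration (not part of the proposed binding list): splitting a communicator with the typed TYPE(MPI_Comm) handle, assuming USE mpi_f08; ierror is omitted because it is OPTIONAL.
  USE mpi_f08
  TYPE(MPI_Comm) :: halfcomm
  INTEGER :: rank
  CALL MPI_Comm_rank(MPI_COMM_WORLD, rank)
  ! processes with the same color (even/odd rank) share the new communicator
  CALL MPI_Comm_split(MPI_COMM_WORLD, MOD(rank, 2), rank, halfcomm)
  ! ... communication on halfcomm ...
  CALL MPI_Comm_free(halfcomm)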
SUBROUTINE MPI_Comm_compare(comm1, comm2, result, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm1 [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm2 [[BR]] INTEGER, INTENT(OUT)::
result [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_create(comm, group, newcomm, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Group), INTENT(IN)::
group [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
newcomm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_create_keyval(comm_copy_attr_fn, comm_delete_attr_fn, comm_keyval, extra_state, ierror) [[BR]] EXTERNAL::
comm_copy_attr_fn, comm_delete_attr_fn [[BR]] INTEGER, INTENT(OUT)::
comm_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
extra_state [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_delete_attr(comm, comm_keyval, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
comm_keyval [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_dup(comm, newcomm, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
newcomm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_COMM_DUP_FN(oldcomm, comm_keyval, extra_state, attribute_val_in, attribute_val_out, flag, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
oldcomm [[BR]] INTEGER, INTENT(IN)::
comm_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
extra_state, attribute_val_in [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
attribute_val_out [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_free(comm, ierror) [[BR]] TYPE(MPI_Comm), INTENT(INOUT)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_free_keyval(comm_keyval, ierror) [[BR]] INTEGER, INTENT(INOUT)::
comm_keyval [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_get_attr(comm, comm_keyval, attribute_val, flag, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
comm_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
attribute_val [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_get_name(comm, comm_name, resultlen, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] CHARACTER(LEN=*), INTENT(OUT)::
comm_name [[BR]] INTEGER, INTENT(OUT)::
resultlen [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Comm_group(comm, group, ierror)* [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Group), INTENT(OUT)::
group [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_COMM_NULL_COPY_FN(oldcomm, comm_keyval, extra_state, attribute_val_in, attribute_val_out, flag, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
oldcomm [[BR]] INTEGER, INTENT(IN)::
comm_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
extra_state, attribute_val_in [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
attribute_val_out [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_COMM_NULL_DELETE_FN(comm, comm_keyval, attribute_val, extra_state, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
comm_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
attribute_val, extra_state [[BR]] INTEGER, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_rank(comm, rank, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(OUT)::
rank [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_remote_group(comm, group, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Group), INTENT(OUT)::
group [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_remote_size(comm, size, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(OUT)::
size [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_set_attr(comm, comm_keyval, attribute_val, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
comm_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
attribute_val [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_set_name(comm, comm_name, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] CHARACTER(LEN=*), INTENT(IN)::
comm_name [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Comm_size(comm, size, ierror)* [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(OUT)::
size [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_split(comm, color, key, newcomm, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
color, key [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
newcomm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_test_inter(comm, flag, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Group_compare(group1, group2, result, ierror) [[BR]] TYPE(MPI_Group), INTENT(IN)::
group1, group2 [[BR]] INTEGER, INTENT(OUT)::
result [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Group_difference(group1, group2, newgroup, ierror) [[BR]] TYPE(MPI_Group), INTENT(IN)::
group1, group2 [[BR]] TYPE(MPI_Group), INTENT(OUT)::
newgroup [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Group_excl(group, n, ranks, newgroup, ierror) [[BR]] TYPE(MPI_Group), INTENT(IN)::
group [[BR]] INTEGER, INTENT(IN)::
n, ranks(*) [[BR]] TYPE(MPI_Group), INTENT(OUT)::
newgroup [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Group_free(group, ierror)* [[BR]] TYPE(MPI_Group), INTENT(INOUT)::
group [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Group_incl(group, n, ranks, newgroup, ierror) [[BR]] INTEGER, INTENT(IN)::
n, ranks(*) [[BR]] TYPE(MPI_Group), INTENT(IN)::
group [[BR]] TYPE(MPI_Group), INTENT(OUT)::
newgroup [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Group_intersection(group1, group2, newgroup, ierror)* [[BR]] TYPE(MPI_Group), INTENT(IN)::
group1, group2 [[BR]] TYPE(MPI_Group), INTENT(OUT)::
newgroup [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Group_range_excl(group, n, ranges, newgroup, ierror) [[BR]] TYPE(MPI_Group), INTENT(IN)::
group [[BR]] INTEGER, INTENT(IN)::
n, ranges(3,*) [[BR]] TYPE(MPI_Group), INTENT(OUT)::
newgroup [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Group_range_incl(group, n, ranges, newgroup, ierror) [[BR]] TYPE(MPI_Group), INTENT(IN)::
group [[BR]] INTEGER, INTENT(IN)::
n, ranges(3,*) [[BR]] TYPE(MPI_Group), INTENT(OUT)::
newgroup [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Group_rank(group, rank, ierror)* [[BR]] TYPE(MPI_Group), INTENT(IN)::
group [[BR]] INTEGER, INTENT(OUT)::
rank [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Group_size(group, size, ierror) [[BR]] TYPE(MPI_Group), INTENT(IN)::
group [[BR]] INTEGER, INTENT(OUT)::
size [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Group_translate_ranks(group1, n, ranks1, group2, ranks2, ierror) [[BR]] TYPE(MPI_Group), INTENT(IN)::
group1, group2 [[BR]] INTEGER, INTENT(IN)::
n, ranks1(*) [[BR]] INTEGER, INTENT(OUT)::
ranks2(*) [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Group_union(group1, group2, newgroup, ierror) [[BR]] TYPE(MPI_Group), INTENT(IN)::
group1, group2 [[BR]] TYPE(MPI_Group), INTENT(OUT)::
newgroup [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Intercomm_create(local_comm, local_leader, peer_comm, remote_leader, tag, newintercomm, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
local_comm, peer_comm [[BR]] INTEGER, INTENT(IN)::
local_leader, remote_leader, tag [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
newintercomm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Intercomm_merge(intercomm, high, newintracomm, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
intercomm [[BR]] LOGICAL, INTENT(IN)::
high [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
newintracomm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_create_keyval(type_copy_attr_fn, type_delete_attr_fn, type_keyval, extra_state, ierror) [[BR]] EXTERNAL::
type_copy_attr_fn, type_delete_attr_fn [[BR]] INTEGER, INTENT(OUT)::
type_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
extra_state [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_delete_attr(datatype, type_keyval, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, INTENT(IN)::
type_keyval [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]](This routine specification was changed by Ticket #252-W). [[BR]] [[BR]] SUBROUTINE MPI_TYPE_DUP_FN(oldtype, type_keyval, extra_state, attribute_val_in, attribute_val_out, flag, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
oldtype [[BR]] INTEGER, INTENT(IN)::
type_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
extra_state, attribute_val_in [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
attribute_val_out [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_free_keyval(type_keyval, ierror) [[BR]] INTEGER, INTENT(INOUT)::
type_keyval [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_get_attr(datatype, type_keyval, attribute_val, flag, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, INTENT(IN)::
type_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
attribute_val [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]](This routine specification was changed by Ticket #252-W). [[BR]] [[BR]] SUBROUTINE MPI_Type_get_name(datatype, type_name, resultlen, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] CHARACTER(LEN=*), INTENT(OUT)::
type_name [[BR]] INTEGER, INTENT(OUT)::
resultlen [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]](This routine specification was changed by Ticket #252-W). [[BR]] [[BR]] _SUBROUTINE MPI_TYPE_NULL_COPY_FN(oldtype, type_keyval, extra_state, attribute_val_in, attribute_val_out, flag, ierror)* [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
oldtype [[BR]] INTEGER, INTENT(IN)::
type_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
extra_state, attribute_val_in [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
attribute_val_out [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_TYPE_NULL_DELETE_FN(datatype, type_keyval, attribute_val, extra_state, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, INTENT(IN)::
type_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
attribute_val, extra_state [[BR]] INTEGER, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]](This routine specification was changed by Ticket #252-W). [[BR]] [[BR]] SUBROUTINE MPI_Type_set_attr(datatype, type_keyval, attribute_val, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, INTENT(IN)::
type_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
attribute_val [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]](This routine specification was changed by Ticket #252-W). [[BR]] [[BR]] SUBROUTINE MPI_Type_set_name(datatype, type_name, ierror) [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] CHARACTER(LEN=*), INTENT(IN)::
type_name [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]](This routine specification was changed by Ticket #252-W). [[BR]] [[BR]] _SUBROUTINE MPI_Win_create_keyval(win_copy_attr_fn, win_delete_attr_fn, win_keyval, extra_state, ierror)* [[BR]] EXTERNAL::
win_copy_attr_fn, win_delete_attr_fn [[BR]] INTEGER, INTENT(OUT)::
win_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
extra_state [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_delete_attr(win, win_keyval, ierror) [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, INTENT(IN)::
win_keyval [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_WIN_DUP_FN(oldwin, win_keyval, extra_state, attribute_val_in, attribute_val_out, flag, ierror) [[BR]] INTEGER, INTENT(IN)::
oldwin, win_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
extra_state, attribute_val_in [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
attribute_val_out [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_free_keyval(win_keyval, ierror) [[BR]] INTEGER, INTENT(INOUT)::
win_keyval [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_get_attr(win, win_keyval, attribute_val, flag, ierror) [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, INTENT(IN)::
win_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
attribute_val [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_get_name(win, win_name, resultlen, ierror) [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] CHARACTER(LEN=*), INTENT(OUT)::
win_name [[BR]] INTEGER, INTENT(OUT)::
resultlen [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_WIN_NULL_COPY_FN(oldwin, win_keyval, extra_state, attribute_val_in, attribute_val_out, flag, ierror)* [[BR]] INTEGER, INTENT(IN)::
oldwin, win_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
extra_state, attribute_val_in [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
attribute_val_out [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_WIN_NULL_DELETE_FN(win, win_keyval, attribute_val, extra_state, ierror) [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, INTENT(IN)::
win_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
attribute_val, extra_state [[BR]] INTEGER, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_set_attr(win, win_keyval, attribute_val, ierror) [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, INTENT(IN)::
win_keyval [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
attribute_val [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_set_name(win, win_name, ierror) [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] CHARACTER(LEN=*), INTENT(IN)::
win_name [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]]A.4.5 Process Topologies Fortran Bindings
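Usage illustration (not part of the proposed binding list): creating a 2-D Cartesian topology, assuming USE mpi_f08 and at least 12 processes; the dimensions and periodicity are invented for the example, and ierror is omitted because it is OPTIONAL.
  USE mpi_f08
  TYPE(MPI_Comm) :: comm_cart
  INTEGER :: dims(2), coords(2), myrank
  LOGICAL :: periods(2)
  dims    = (/ 4, 3 /)
  periods = (/ .TRUE., .FALSE. /)
  CALL MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, .TRUE., comm_cart)
  CALL MPI_Comm_rank(comm_cart, myrank)
  CALL MPI_Cart_coords(comm_cart, myrank, 2, coords)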
SUBROUTINE MPI_Cartdim_get(comm, ndims, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(OUT)::
ndims [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Cart_coords(comm, rank, maxdims, coords, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
rank, maxdims [[BR]] INTEGER, INTENT(OUT)::
coords(*) [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Cart_create(comm_old, ndims, dims, periods, reorder, comm_cart, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm_old [[BR]] INTEGER, INTENT(IN)::
ndims, dims(*) [[BR]] LOGICAL, INTENT(IN)::
periods(*), reorder [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
comm_cart [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Cart_get(comm, maxdims, dims, periods, coords, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
maxdims [[BR]] INTEGER, INTENT(OUT)::
dims(*), coords(*) [[BR]] LOGICAL, INTENT(OUT)::
periods(*) [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Cart_map(comm, ndims, dims, periods, newrank, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
ndims, dims(*) [[BR]] LOGICAL, INTENT(IN)::
periods(*) [[BR]] INTEGER, INTENT(OUT)::
newrank [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Cart_rank(comm, coords, rank, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
coords(*) [[BR]] INTEGER, INTENT(OUT)::
rank [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Cart_shift(comm, direction, disp, rank_source, rank_dest, ierror)* [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
direction, disp [[BR]] INTEGER, INTENT(OUT)::
rank_source, rank_dest [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Cart_sub(comm, remain_dims, newcomm, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] LOGICAL, INTENT(IN)::
remain_dims(*) [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
newcomm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Dims_create(nnodes, ndims, dims, ierror)* [[BR]] INTEGER, INTENT(IN)::
nnodes, ndims [[BR]] INTEGER, INTENT(INOUT)::
dims(*) [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Dist_graph_create(comm_old, n, sources, degrees, destinations, weights, info, reorder, comm_dist_graph, ierror)* [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm_old [[BR]] INTEGER, INTENT(IN)::
n, sources(*), degrees(*), destinations(*), weights(*) [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] LOGICAL, INTENT(IN)::
reorder [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
comm_dist_graph [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Dist_graph_create_adjacent(comm_old, indegree, sources, sourceweights, outdegree, destinations, destweights, info, reorder, comm_dist_graph, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm_old [[BR]] INTEGER, INTENT(IN)::
indegree, sources(*), sourceweights(*), outdegree, destinations(*), destweights(*) [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] LOGICAL, INTENT(IN)::
reorder [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
comm_dist_graph [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Dist_graph_neighbors(comm, maxindegree, sources, sourceweights, maxoutdegree, destinations, destweights, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
maxindegree, maxoutdegree [[BR]] INTEGER, INTENT(OUT)::
sources(*), destinations(*) [[BR]] INTEGER::
sourceweights(*), destweights(*) [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Dist_graph_neighbors_count(comm, indegree, outdegree, weighted, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(OUT)::
indegree, outdegree [[BR]] LOGICAL, INTENT(OUT)::
weighted [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Graphdims_get(comm, nnodes, nedges, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(OUT)::
nnodes, nedges [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Graph_create(comm_old, nnodes, index, edges, reorder, comm_graph, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm_old [[BR]] INTEGER, INTENT(IN)::
nnodes, index(*), edges(*) [[BR]] LOGICAL, INTENT(IN)::
reorder [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
comm_graph [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Graph_get(comm, maxindex, maxedges, index, edges, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
maxindex, maxedges [[BR]] INTEGER, INTENT(OUT)::
index(*), edges(*) [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Graph_map(comm, nnodes, index, edges, newrank, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
nnodes, index(*), edges(*) [[BR]] INTEGER, INTENT(OUT)::
newrank [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Graph_neighbors(comm, rank, maxneighbors, neighbors, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
rank, maxneighbors [[BR]] INTEGER, INTENT(OUT)::
neighbors(*) [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Graph_neighbors_count(comm, rank, nneighbors, ierror)* [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
rank [[BR]] INTEGER, INTENT(OUT)::
nneighbors [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Topo_test(comm, status, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(OUT)::
status [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]]A.4.6 MPI Environmental Management Fortran Bindings
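Usage illustration (not part of the proposed binding list): allocating MPI memory through the TYPE(C_PTR) result of MPI_Alloc_mem and mapping it to a Fortran pointer array, following the bindings listed in this section; the length 1000 and the 4-byte REAL size are invented for the example.
  USE mpi_f08
  USE, INTRINSIC :: ISO_C_BINDING
  TYPE(C_PTR) :: baseptr
  REAL, POINTER :: buf(:)
  INTEGER(KIND=MPI_ADDRESS_KIND) :: nbytes
  nbytes = 1000 * 4                              ! assumes 4-byte REAL (illustrative)
  CALL MPI_Alloc_mem(nbytes, MPI_INFO_NULL, baseptr)
  CALL C_F_POINTER(baseptr, buf, (/ 1000 /))     ! view the allocation as buf(1:1000)
  ! ... use buf as an ordinary Fortran array ...
  CALL MPI_Free_mem(baseptr)                     ! MPI_Free_mem takes the C pointer in this listing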
DOUBLE PRECISION FUNCTION MPI_Wtick() [[BR]] END FUNCTION [[BR]] [[BR]] DOUBLE PRECISION FUNCTION MPI_Wtime() [[BR]] END FUNCTION [[BR]] [[BR]] SUBROUTINE MPI_Abort(comm, errorcode, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)
::
comm [[BR]] INTEGER, INTENT(IN)::
errorcode [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Add_error_class(errorclass, ierror) [[BR]] INTEGER, INTENT(OUT)::
errorclass [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Add_error_code(errorclass, errorcode, ierror) [[BR]] INTEGER, INTENT(IN)::
errorclass [[BR]] INTEGER, INTENT(OUT)::
errorcode [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Add_error_string(errorcode, string, ierror) [[BR]] INTEGER, INTENT(IN)::
errorcode [[BR]] CHARACTER(LEN=_), INTENT(IN)::
string [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Alloc_mem(size, info, baseptr, ierror)* [[BR]] USE, INTRINSIC::
ISO_C_BINDING [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
size [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] TYPE(C_PTR), INTENT(OUT)::
baseptr [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_call_errhandler(comm, errorcode, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] INTEGER, INTENT(IN)::
errorcode [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_create_errhandler(comm_errhandler_fn, errhandler, ierror) [[BR]] EXTERNAL::
comm_errhandler_fn [[BR]] TYPE(MPI_Errhandler), INTENT(OUT)::
errhandler [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]](This routine specification was changed by Ticket #252-W). [[BR]] [[BR]] SUBROUTINE MPI_Comm_get_errhandler(comm, errhandler, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Errhandler), INTENT(OUT)::
errhandler [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_set_errhandler(comm, errhandler, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Errhandler), INTENT(IN)::
errhandler [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Errhandler_free(errhandler, ierror) [[BR]] TYPE(MPI_Errhandler), INTENT(INOUT)::
errhandler [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Error_class(errorcode, errorclass, ierror) [[BR]] INTEGER, INTENT(IN)::
errorcode [[BR]] INTEGER, INTENT(OUT)::
errorclass [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Error_string(errorcode, string, resultlen, ierror) [[BR]] INTEGER, INTENT(IN)::
errorcode [[BR]] CHARACTER(LEN=_), INTENT(OUT)::
string [[BR]] INTEGER, INTENT(OUT)::
resultlen [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_File_call_errhandler(fh, errorcode, ierror)* [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER, INTENT(IN)::
errorcode [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_create_errhandler(file_errhandler_fn, errhandler, ierror) [[BR]] EXTERNAL::
file_errhandler_fn [[BR]] TYPE(MPI_Errhandler), INTENT(OUT)::
errhandler [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]](This routine specification was changed by Ticket #252-W). [[BR]] [[BR]] SUBROUTINE MPI_File_get_errhandler(file, errhandler, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
file [[BR]] TYPE(MPI_Errhandler), INTENT(OUT)::
errhandler [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_set_errhandler(file, errhandler, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
file [[BR]] TYPE(MPI_Errhandler), INTENT(IN)::
errhandler [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Finalize(ierror) [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Finalized(flag, ierror) [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Free_mem(base, ierror) [[BR]] USE, INTRINSIC::
ISO_C_BINDING [[BR]] TYPE(C_PTR), INTENT(IN)::
base [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Get_processor_name( name, resultlen, ierror) [[BR]] CHARACTER(LEN=_), INTENT(OUT)::
name [[BR]] INTEGER, INTENT(OUT)::
resultlen [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Get_version(version, subversion, ierror)* [[BR]] INTEGER, INTENT(OUT)::
version, subversion [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Init(ierror) [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Initialized(flag, ierror) [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_call_errhandler(win, errorcode, ierror) [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, INTENT(IN)::
errorcode [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_create_errhandler(win_errhandler_fn, errhandler, ierror) [[BR]] EXTERNAL::
win_errhandler_fn [[BR]] TYPE(MPI_Errhandler), INTENT(OUT)::
errhandler [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]](This routine specification was changed by Ticket #252-W). [[BR]] [[BR]] SUBROUTINE MPI_Win_get_errhandler(win, errhandler, ierror) [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] TYPE(MPI_Errhandler), INTENT(OUT)::
errhandler [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_set_errhandler(win, errhandler, ierror) [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] TYPE(MPI_Errhandler), INTENT(IN)::
errhandler [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]]A.4.7 The Info Object Fortran Bindings
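Usage illustration (not part of the proposed binding list): building an info object with the typed TYPE(MPI_Info) handle, assuming USE mpi_f08; the key/value pair is an example only, and ierror is omitted because it is OPTIONAL.
  USE mpi_f08
  TYPE(MPI_Info) :: info
  CALL MPI_Info_create(info)
  CALL MPI_Info_set(info, 'cb_nodes', '4')   ! keys and values are plain character strings
  ! ... pass info to any routine that accepts an MPI_Info handle ...
  CALL MPI_Info_free(info)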
SUBROUTINE MPI_Info_create(info, ierror) [[BR]] TYPE(MPI_Info), INTENT(OUT)::
info [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Info_delete(info, key, ierror) [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] CHARACTER(LEN=*), INTENT(IN)::
key [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Info_dup(info, newinfo, ierror)* [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] TYPE(MPI_Info), INTENT(OUT)::
newinfo [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Info_free(info, ierror) [[BR]] TYPE(MPI_Info), INTENT(INOUT)::
info [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Info_get(info, key, valuelen, value, flag, ierror) [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] CHARACTER(LEN=*), INTENT(IN)::
key [[BR]] INTEGER, INTENT(IN)::
valuelen [[BR]] CHARACTER(LEN=*), INTENT(OUT)::
value [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Info_get_nkeys(info, nkeys, ierror) [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] INTEGER, INTENT(OUT)::
nkeys [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Info_get_nthkey(info, n, key, ierror) [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] INTEGER, INTENT(IN)::
n [[BR]] CHARACTER(LEN=*), INTENT(OUT)::
key [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Info_get_valuelen(info, key, valuelen, flag, ierror) [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] CHARACTER(LEN=*), INTENT(IN)::
key [[BR]] INTEGER, INTENT(OUT)::
valuelen [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Info_set(info, key, value, ierror)* [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] CHARACTER(LEN=*), INTENT(IN)::
key, value [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]]A.4.8 Process Creation and Management Fortran Bindings
SUBROUTINE MPI_Close_port(port_name, ierror) [[BR]] CHARACTER(LEN=*), INTENT(IN)::
port_name [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Comm_accept(port_name, info, root, comm, newcomm, ierror)* [[BR]] CHARACTER(LEN=_), INTENT(IN)::
port_name [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] INTEGER, INTENT(IN)::
root [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
newcomm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Comm_connect(port_name, info, root, comm, newcomm, ierror)* [[BR]] CHARACTER(LEN=_), INTENT(IN)::
port_name [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] INTEGER, INTENT(IN)::
root [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
newcomm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Comm_disconnect(comm, ierror)* [[BR]] TYPE(MPI_Comm), INTENT(INOUT)::
comm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_get_parent(parent, ierror) [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
parent [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_join(fd, intercomm, ierror) [[BR]] INTEGER, INTENT(IN)::
fd [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
intercomm [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Comm_spawn(command, argv, maxprocs, info, root, comm, intercomm, array_of_errcodes, ierror) [[BR]] CHARACTER(LEN=*), INTENT(IN)::
command, argv(*) ! optional by overloading [[BR]] INTEGER, INTENT(IN)::
maxprocs, root [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
intercomm [[BR]] INTEGER, INTENT(OUT)::
array_of_errcodes(*) ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Comm_spawn_multiple(count, array_of_commands, array_of_argv, array_of_maxprocs, array_of_info, root, comm, intercomm, array_of_errcodes, ierror)* [[BR]] INTEGER, INTENT(IN)::
count, array_of_maxprocs(*), root [[BR]] CHARACTER(LEN=*), INTENT(IN)::
array_of_commands(*), array_of_argv(count, *) ! optional by overloading [[BR]] TYPE(MPI_Info), INTENT(IN)::
array_of_info(*) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Comm), INTENT(OUT)::
intercomm [[BR]] INTEGER, INTENT(OUT)::
array_of_errcodes(*) ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Lookup_name(service_name, info, port_name, ierror) [[BR]] CHARACTER(LEN=*), INTENT(IN)::
service_name [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] CHARACTER(LEN=*), INTENT(OUT)::
port_name [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Open_port(info, port_name, ierror) [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] CHARACTER(LEN=*), INTENT(OUT)::
port_name [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Publish_name(service_name, info, port_name, ierror) [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] CHARACTER(LEN=*), INTENT(IN)::
service_name, port_name [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Unpublish_name(service_name, info, port_name, ierror)* [[BR]] CHARACTER(LEN=*), INTENT(IN)::
service_name, port_name [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]]A.4.9 One-Sided Communications Fortran Bindings
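Usage illustration (not part of the proposed binding list): a fence-synchronized put through the one-sided bindings below, assuming USE mpi_f08 and at least 2 processes; the buffer sizes and the 4-byte REAL assumption are invented for the example.
  USE mpi_f08
  TYPE(MPI_Win) :: win
  REAL :: winbuf(1000), x(10)
  INTEGER(KIND=MPI_ADDRESS_KIND) :: winsize, disp
  x = 1.0
  winsize = 1000 * 4                            ! assumes 4-byte REAL (illustrative)
  CALL MPI_Win_create(winbuf, winsize, 4, MPI_INFO_NULL, MPI_COMM_WORLD, win)
  CALL MPI_Win_fence(0, win)
  disp = 0
  CALL MPI_Put(x, 10, MPI_REAL, 1, disp, 10, MPI_REAL, win)   ! write into rank 1's window
  CALL MPI_Win_fence(0, win)
  CALL MPI_Win_free(win)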
SUBROUTINE MPI_Accumulate(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, op, win, ierror) [[BR]] TYPE(*), DIMENSION(..)::
origin_addr [[BR]] INTEGER, INTENT(IN)::
origin_count, target_rank, target_count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
origin_datatype [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
target_disp [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
target_datatype [[BR]] TYPE(MPI_Op), INTENT(IN)::
op [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Get(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, ierror) [[BR]] TYPE(*), DIMENSION(..)::
origin_addr [[BR]] INTEGER, INTENT(IN)::
origin_count, target_rank, target_count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
origin_datatype [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
target_disp [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
target_datatype [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Put(origin_addr, origin_count, origin_datatype, target_rank, target_disp, target_count, target_datatype, win, ierror) [[BR]] TYPE(*), DIMENSION(..)::
origin_addr [[BR]] INTEGER, INTENT(IN)::
origin_count, target_rank, target_count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
origin_datatype [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
target_disp [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
target_datatype [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Win_complete(win, ierror)* [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_create(base, size, disp_unit, info, comm, win, ierror) [[BR]] TYPE(*), DIMENSION(..)::
base [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
size [[BR]] INTEGER, INTENT(IN)::
disp_unit [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] TYPE(MPI_Win), INTENT(OUT)::
win [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] _SUBROUTINE MPI_Win_fence(assert, win, ierror)* [[BR]] INTEGER, INTENT(IN)::
assert [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_free(win, ierror) [[BR]] TYPE(MPI_Win), INTENT(INOUT)::
win [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_get_group(win, group, ierror) [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] TYPE(MPI_Group), INTENT(OUT)::
group [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_lock(lock_type, rank, assert, win, ierror) [[BR]] INTEGER, INTENT(IN)::
lock_type, rank, assert [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_post(group, assert, win, ierror) [[BR]] TYPE(MPI_Group), INTENT(IN)::
group [[BR]] INTEGER, INTENT(IN)::
assert [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_start(group, assert, win, ierror) [[BR]] TYPE(MPI_Group), INTENT(IN)::
group [[BR]] INTEGER, INTENT(IN)::
assert [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_test(win, flag, ierror) [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_unlock(rank, win, ierror) [[BR]] INTEGER, INTENT(IN)::
rank [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Win_wait(win, ierror) [[BR]] TYPE(MPI_Win), INTENT(IN)::
win [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]]A.4.10 External Interfaces Fortran Bindings
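Usage illustration (not part of the proposed binding list): requesting a thread support level with MPI_Init_thread, assuming USE mpi_f08; ierror is omitted because it is OPTIONAL.
  USE mpi_f08
  INTEGER :: provided
  CALL MPI_Init_thread(MPI_THREAD_FUNNELED, provided)
  IF (provided < MPI_THREAD_FUNNELED) THEN
     ! fall back to a single-threaded communication scheme
  END IF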
SUBROUTINE MPI_Grequest_complete(request, ierror) [[BR]] TYPE(MPI_Request), INTENT(IN)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Grequest_start(query_fn, free_fn, cancel_fn, extra_state, request, ierror) [[BR]] EXTERNAL::
query_fn, free_fn, cancel_fn [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
extra_state [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Init_thread(required, provided, ierror) [[BR]] INTEGER, INTENT(IN)::
required [[BR]] INTEGER, INTENT(OUT)::
provided [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Is_thread_main(flag, ierror) [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Query_thread(provided, ierror) [[BR]] INTEGER, INTENT(OUT)::
provided [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Status_set_cancelled(status, flag, ierror) [[BR]] TYPE(MPI_Status), INTENT(INOUT)::
status [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Status_set_elements(status, datatype, count, ierror) [[BR]] TYPE(MPI_Status), INTENT(INOUT)::
status [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, INTENT(IN)::
count [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]]A.4.11 I/O Fortran Bindings
SUBROUTINE MPI_File_close(fh, ierror) [[BR]] TYPE(MPI_File), INTENT(INOUT)::
fh [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_delete(filename, info, ierror) [[BR]] CHARACTER(LEN=*), INTENT(IN)::
filename [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_get_amode(fh, amode, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER, INTENT(OUT)::
amode [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_get_atomicity(fh, flag, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] LOGICAL, INTENT(OUT)::
flag [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_get_byte_offset(fh, offset, disp, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN)::
offset [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(OUT)::
disp [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_get_group(fh, group, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(MPI_Group), INTENT(OUT)::
group [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_get_info(fh, info_used, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(MPI_Info), INTENT(OUT)::
info_used [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_get_position(fh, offset, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(OUT)::
offset [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_get_position_shared(fh, offset, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(OUT)::
offset [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_get_size(fh, size, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(OUT)::
size [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_get_type_extent(fh, datatype, extent, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(OUT)::
extent [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_get_view(fh, disp, etype, filetype, datarep, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(OUT)::
disp [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
etype [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
filetype [[BR]] CHARACTER(LEN=*), INTENT(OUT)::
datarep [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_iread(fh, buf, count, datatype, request, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_iread_at(fh, offset, buf, count, datatype, request, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN)::
offset [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_iread_shared(fh, buf, count, datatype, request, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_iwrite(fh, buf, count, datatype, request, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_iwrite_at(fh, offset, buf, count, datatype, request, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN)::
offset [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_iwrite_shared(fh, buf, count, datatype, request, ierror) [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Request), INTENT(OUT)::
request [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_open(comm, filename, amode, info, fh, ierror) [[BR]] TYPE(MPI_Comm), INTENT(IN)::
comm [[BR]] CHARACTER(LEN=*), INTENT(IN)::
filename [[BR]] INTEGER, INTENT(IN)::
amode [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] TYPE(MPI_File), INTENT(OUT)::
fh [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_preallocate(fh, size, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN)::
size [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_read(fh, buf, count, datatype, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_read_all(fh, buf, count, datatype, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_read_all_begin(fh, buf, count, datatype, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_read_all_end(fh, buf, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_read_at(fh, offset, buf, count, datatype, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN)::
offset [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_read_at_all(fh, offset, buf, count, datatype, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN)::
offset [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_read_at_all_begin(fh, offset, buf, count, datatype, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN)::
offset [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_read_at_all_end(fh, buf, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_read_ordered(fh, buf, count, datatype, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_read_ordered_begin(fh, buf, count, datatype, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_read_ordered_end(fh, buf, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_read_shared(fh, buf, count, datatype, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_seek(fh, offset, whence, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN)::
offset [[BR]] INTEGER, INTENT(IN)::
whence [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_seek_shared(fh, offset, whence, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN)::
offset [[BR]] INTEGER, INTENT(IN)::
whence [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_set_atomicity(fh, flag, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] LOGICAL, INTENT(IN)::
flag [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_set_info(fh, info, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_set_size(fh, size, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN)::
size [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_set_view(fh, disp, etype, filetype, datarep, info, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN)::
disp [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
etype [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
filetype [[BR]] CHARACTER(LEN=*), INTENT(IN)::
datarep [[BR]] TYPE(MPI_Info), INTENT(IN)::
info [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_sync(fh, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_write(fh, buf, count, datatype, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_write_all(fh, buf, count, datatype, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_write_all_begin(fh, buf, count, datatype, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_write_all_end(fh, buf, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_write_at(fh, offset, buf, count, datatype, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN)::
offset [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_write_at_all(fh, offset, buf, count, datatype, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN)::
offset [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_write_at_all_begin(fh, offset, buf, count, datatype, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] INTEGER(KIND=MPI_OFFSET_KIND), INTENT(IN)::
offset [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_write_at_all_end(fh, buf, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_write_ordered(fh, buf, count, datatype, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_write_ordered_begin(fh, buf, count, datatype, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_write_ordered_end(fh, buf, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_File_write_shared(fh, buf, count, datatype, status, ierror) [[BR]] TYPE(MPI_File), INTENT(IN)::
fh [[BR]] TYPE(*), DIMENSION(..)::
buf [[BR]] INTEGER, INTENT(IN)::
count [[BR]] TYPE(MPI_Datatype), INTENT(IN)::
datatype [[BR]] TYPE(MPI_Status), INTENT(OUT)::
status ! optional by overloading [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Register_datarep(datarep, read_conversion_fn, write_conversion_fn, dtype_file_extent_fn, extra_state, ierror) [[BR]] CHARACTER(LEN=*), INTENT(IN)::
datarep [[BR]] EXTERNAL::
read_conversion_fn, write_conversion_fn, dtype_file_extent_fn [[BR]] INTEGER(KIND=MPI_ADDRESS_KIND), INTENT(IN)::
extra_state [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]]A.4.12 Language Bindings Fortran Bindings
SUBROUTINE MPI_Sizeof(x, size, ierror) [[BR]] TYPE(*)::
x [[BR]] INTEGER, INTENT(OUT)::
size [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_create_f90_complex(p, r, newtype, ierror) [[BR]] INTEGER, INTENT(IN)::
p, r [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
newtype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_create_f90_integer(r, newtype, ierror) [[BR]] INTEGER, INTENT(IN)::
r [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
newtype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_create_f90_real(p, r, newtype, ierror) [[BR]] INTEGER, INTENT(IN)::
p, r [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
newtype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]] [[BR]] SUBROUTINE MPI_Type_match_size(typeclass, size, datatype, ierror) [[BR]] INTEGER, INTENT(IN)::
typeclass, size [[BR]] TYPE(MPI_Datatype), INTENT(OUT)::
datatype [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]](This routine specification was changed by Ticket #252-W). [[BR]] [[BR]]A.4.13 Profiling Interface Fortran Bindings
SUBROUTINE MPI_Pcontrol(level) [[BR]] INTEGER, INTENT(IN)::
level [[BR]] END SUBROUTINE [[BR]] [[BR]]Impact on Implementations
See previous tickets.
Impact on Applications / Users
See previous tickets.
Alternative Solutions
If one wants to implement also the currently deprecated functions with the new interface, then these interfaces are used:
A.4.14 Deprecated Fortran Bindings
SUBROUTINE MPI_Address(location, address, ierror) [[BR]] TYPE(*), DIMENSION(..)::
location [[BR]] INTEGER, INTENT(OUT)::
address [[BR]] INTEGER, OPTIONAL, INTENT(OUT)::
ierror [[BR]] END SUBROUTINE [[BR]]...
(snip - see original #248-T)
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
249-U: Alternative formulation for Section 16.2 Fortran Support
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Description
'''Major decisions in this ticket:'''
mpi_f08
'''Details:'''
Ticket #230-B overcomes the history-based formulations of MPI-2.2 Section 16.2.
In this ticket, an alternative solution is presented that keeps this history-based view and adds an "Advanced Fortran Support" level.
It is proposed not to vote for this ticket. Ticket #230-B is the better solution.
Extended Scope
None.
History
Proposed Solution
'''MPI-2.2, Chapter 16.2, Fortran Support: [[BR]] MPI-2.2, Section 16.2.1 Overview, page 480, lines 23-25 read'''
The Fortran MPI-2 language bindings have been designed to be compatible with the Fortran 90 standard (and later). These bindings are in most cases compatible with Fortran 77, implicit-style interfaces.
'''but should read'''
The Fortran MPI~~-2~~ language bindings have been designed to be compatible with the Fortran 90 standard (and later). These bindings are in most cases compatible with Fortran 77, implicit-style interfaces.
'''MPI-2.2, Section 16.2.1 Overview, page 480, lines 33-47 read'''
MPI defines two levels of Fortran support, described in Sections 16.2.3 and 16.2.4. In the rest of this section, "Fortran" and "Fortran 90" shall refer to "Fortran 90" and its successors, unless qualified.
Extended Fortran Support An implementation with this level of Fortran support provides Basic Fortran Support plus additional features that specifically support Fortran 90, as described in Section 16.2.4.
A compliant MPI-2 implementation providing a Fortran interface must provide Extended Fortran Support unless the target compiler does not support modules or KIND- parameterized types.
'''but should read'''
MPI defines ~~two~~ three levels of Fortran support, described in Sections 16.2.3,~~ and~~ 16.2.4, and 16.2.6. In the rest of this section, "Fortran" and "Fortran 90" shall refer to "Fortran 90" and its successors, unless qualified.
Extended Fortran Support An implementation with this level of Fortran support provides Basic Fortran Support plus additional features that specifically support Fortran 90, as described in Section 16.2.4.
3. Advanced Fortran Support An implementation with this level of Fortran support provides Extended Fortran Support plus additional features that partially require Fortran 2008, as described in Section 16.2.6.
A compliant MPI-2 implementation providing a Fortran interface must provide Extended Fortran Support unless the target compiler does not support modules or KIND- parameterized types.
'''A compliant MPI-3 implementation providing a Fortran interface must provide Advanced Fortran Support unless the target compiler does not support explicit interfaces with `TYPE(*), DIMENSION(..)`.'''
'''After MPI-2.2, Section 16.2.5, page 497, line 19, the following section is added:''' [[BR]]The ticket numbers in parentheses (#xxx-X) indicate sentences that are removed if the appropriate ticket is not voted in.
16.2.6 Advanced Fortran Support
The include file mpif.h is deprecated (#233-E). The module mpi guarantees compile-time argument checking except for all choice arguments, i.e., the buffers (#232-D). A new module mpi_f08 is introduced. This module guarantees compile-time argument checking. All handles are defined with named types (instead of INTEGER handles in module mpi) (#231-C). The buffers are declared with the new Fortran 2008 feature assumed type and assumed rank "TYPE(*), DIMENSION(..)", and with this, non-contiguous sub-arrays are now also valid in nonblocking routines (#234-F). With this module, new Fortran 2008 definitions are added for each MPI routine (#247-S), except for routines that are deprecated (#241-M). Each argument is given an INTENT of IN, OUT, or INOUT where appropriate (#242-N). All status and array_of_statuses output arguments are declared as optional (only with #244-P Alternative Solution). All ierror output arguments are declared as optional, except for user-defined callback functions (e.g., comm_copy_attr_fn) and their predefined callbacks (e.g., MPI_NULL_COPY_FN) (#239-K).
If the target compiler does not support explicit interfaces with assumed type and assumed rank, then the use of non-contiguous sub-arrays in nonblocking calls may be restricted as with module mpi (#234-F).
'''Advice to implementors. (#232-D)''' In module mpi, with most compilers the choice arguments can be implemented with an explicit interface; a sketch of one possible form is given below. It is explicitly allowed that the choice arguments are implemented in the same way as with module mpi_f08. '''(End of advice to implementors.)'''
'''Rationale.''' For user-defined callback functions (e.g., comm_copy_attr_fn) and their predefined callbacks (e.g., MPI_NULL_COPY_FN), the ierror argument is not optional, i.e., these user-defined functions need not check whether the MPI library calls these routines with or without an actual ierror output argument. '''(End of rationale.) (#239-K)'''
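A minimal sketch of such an explicit interface for one routine in module mpi (shown for MPI_SEND only). The directive that tells the compiler to ignore type, kind, and rank (TKR) checking of the buffer is compiler specific; the spelling shown here is an assumption of this sketch, not part of the proposed standard text:
{{{
! Sketch only: a choice-buffer interface in module mpi.
! The illustrative module name mpi_sketch is not the real module mpi.
MODULE mpi_sketch
  IMPLICIT NONE
  INTERFACE
    SUBROUTINE MPI_SEND(buf, count, datatype, dest, tag, comm, ierror)
      !DIR$ IGNORE_TKR buf                 ! compiler-specific directive (assumed spelling)
      INTEGER, DIMENSION(*) :: buf         ! any actual type, kind, and rank is accepted
      INTEGER :: count, datatype, dest, tag, comm, ierror
    END SUBROUTINE MPI_SEND
  END INTERFACE
END MODULE mpi_sketch
}}}
With an interface of this kind, module mpi can check all non-choice arguments at compile time while still accepting arbitrary buffers, which matches the behavior described for #232-D above.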
Impact on Implementations
See Ticket #230-B.
Impact on Applications / Users
See Ticket #230-B.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
250-V: Minor Corrections in Fortran Interfaces
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: yes by acclamation.
Description
'''Major decisions in this ticket:'''
Remove the duplicated dummy argument request in the Fortran binding type declaration part of MPI_SEND_INIT and MPI_BSEND_INIT.
'''Details:'''
All callback prototype names end with _FUNCTION, and all dummy arguments and predefined values for such callback functions end with _FN. There is only one set of errors in the Fortran interfaces in the document. To change these names is only an editing change, not a modification of the MPI interface, because these Fortran names are not part of mpif.h. To be modified:
Extended Scope
None.
History
Proposed Solution
'''MPI-2.2, Section 3.9 Persistent Communication Requests, in the Fortran declaration of MPI_SEND_INIT, page 70, line 3 reads'''
INTEGER REQUEST, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR
'''but should read'''
INTEGER ~~REQUEST, ~~COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR
'''MPI-2.2, Section 3.9 Persistent Communication Requests, in the Fortran declaration of MPI_BSEND_INIT, page 70, line 28 reads'''
INTEGER REQUEST, COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR
'''but should read'''
INTEGER ~~REQUEST, ~~COUNT, DATATYPE, DEST, TAG, COMM, REQUEST, IERROR
'''MPI-2.2, Section 5.9.7 Process-local reduction, in the Fortran declaration of MPI_REDUCE_LOCAL, page 177, line 14 reads'''
MPI_REDUCE_LOCAL(INBUF, INOUBUF, COUNT, DATATYPE, OP, IERROR)
'''but should read'''
MPI_REDUCE_LOCAL(INBUF, INOUTBUF, COUNT, DATATYPE, OP, IERROR)
'''MPI-2.2, Section 6.6.2 Intercommunicator Operations, page 220, lines 47-48 read'''
MPI_INTERCOMM_MERGE(INTERCOMM, HIGH, INTRACOMM, IERROR) INTEGER INTERCOMM, INTRACOMM, IERROR
'''but should read'''
MPI_INTERCOMM_MERGE(INTERCOMM, HIGH, NEWINTRACOMM, IERROR) INTEGER INTERCOMM, NEWINTRACOMM, IERROR
'''MPI-2.2, Section 6.7.2 Communicators, page 226, line 44 [[BR]] and Appendix A.1.1 Constants, page 520, lines 14, 17, one should modify (3 times):'''
COMM_COPY_ATTR_FN --> COMM_COPY_ATTR_FUNCTION
'''MPI-2.2, Section 6.7.2 Communicators, page 227, line 5 [[BR]] and Appendix A.1.1 Constants, page 520, line 20, one should modify (2 times):'''
COMM_DELETE_ATTR_FN --> COMM_DELETE_ATTR_FUNCTION
'''MPI-2.2, Section 6.7.3 Windows, page 231, line 40 [[BR]] and Appendix A.1.1 Constants, page 520, lines 23, 26, one should modify (3 times):'''
WIN_COPY_ATTR_FN --> WIN_COPY_ATTR_FUNCTION
'''MPI-2.2, Section 6.7.3 Windows, page 232, line 1 [[BR]] and Appendix A.1.1 Constants, page 520, line 29, one should modify (2 times):'''
WIN_DELETE_ATTR_FN --> WIN_DELETE_ATTR_FUNCTION
'''MPI-2.2, Section 6.7.4 Datatypes, page 234, line 28 [[BR]] and Appendix A.1.1 Constants, page 520, lines 32, 35, one should modify (3 times):'''
TYPE_COPY_ATTR_FN --> TYPE_COPY_ATTR_FUNCTION
'''MPI-2.2, Section 6.7.4 Datatypes, page 234, line 36 [[BR]] and Appendix A.1.1 Constants, page 520, line 38, one should modify (2 times):'''
TYPE_DELETE_ATTR_FN --> TYPE_DELETE_ATTR_FUNCTION
Impact on Implementations
Correction of module mpi and mpif.h.
Impact on Applications / Users
None.
Alternative Solutions
Entry for the Change Log
MPI-2.2, Section xxxx on page xxx.[[BR]] yyy.
252-W: Substituting dummy argument name "type" by "datatype" or "oldtype", and others
See Ticket #229-A for an overview on the New MPI-3 Fortran Support.
Votes
Straw vote Oct. 11, 2010: yes by acclamation.
Description
'''Major decisions in this ticket:'''
'''Details:'''
The problem with "type" arises with the following MPI library routines:
The problem with "function" arises with the following MPI library routines:
The change "_FN" --> "_FUNCTION" in callback prototype names is necessary to have the same names in C and Fortran, and to have a clear distinction between prototype names (with _FUNCTION) and predefined arguments (always with _FN).
With the new MPI-3.0 explicit Fortran interfaces, applications can freely choose between positional argument lists and keyword-based argument lists. For the first time, the names of the dummy arguments are therefore relevant. The dummy argument names should not conflict with language keywords. Current Fortran can resolve such conflicts, but it is bad programming practice to use variable names identical to Fortran keywords. In the MPI-2.2 specification, this problem arises with the Fortran keyword "TYPE".
In addition, in all language bindings, the dummy argument names should be identical to the language-independent dummy argument names.
MPI-3.0 will be the last time that dummy argument names can be changed without any conflicts for existing application programs. In the C binding, dummy argument name changes do not matter.
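A minimal sketch of the difference, assuming an MPI library that already provides the proposed mpi_f08 module (the program and variable names are invented for illustration):
{{{
PROGRAM keyword_arguments
  USE mpi_f08
  IMPLICIT NONE
  TYPE(MPI_Datatype) :: copytype
  INTEGER :: ierr

  CALL MPI_Init()

  ! Positional argument list: dummy argument names are irrelevant here.
  CALL MPI_Type_dup(MPI_INTEGER, copytype, ierr)
  CALL MPI_Type_free(copytype)

  ! Keyword-based argument list: the dummy argument names are now visible
  ! to the application.  Renaming "type" to "oldtype", as proposed in the
  ! solution below, avoids a keyword that collides with the Fortran
  ! keyword TYPE.  The optional ierror is omitted here.
  CALL MPI_Type_dup(oldtype=MPI_INTEGER, newtype=copytype)
  CALL MPI_Type_free(copytype)

  CALL MPI_Finalize()
END PROGRAM keyword_arguments
}}}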
Extended Scope
None.
History
Proposed Solution
For MPI_Type_dup(type, newtype), [[BR]] on MPI-2.2 Sect. 4.1.10, page 100, lines 36, 37, 41, 42, 43, and page 101, lines 3, 7, [[BR]] the dummy argument name type must be substituted (7 times) by oldtype.
...
(snip - see original #252-W)
Impact on Implementations
The dummy argument names in the header files mpi.h and mpif.h and in the Fortran modules mpi and mpi_f08 must be changed. The C library routines need not be changed.
Impact on Applications / Users
None.
Alternative Solutions
Entry for the Change Log
...
(snip - see original #252-W)