jeffhammond opened this issue 1 year ago
There are at least three cases to consider, assuming the buffer is a non-contiguous subarray.

**Case 1:** The datatype argument corresponds to an element of the array, or is a contiguous datatype, and the total number of elements expressed by `(count, datatype)` evenly divides the buffer.
Easy example:

```fortran
integer, dimension(4,4) :: A
call MPI_Bcast(A(1:3,1:3), 9, MPI_INTEGER, 0, MPI_COMM_WORLD)
```
Harder example:

```fortran
integer, dimension(4,4) :: A
call MPI_Bcast(A(1:3,1:3), 6, MPI_INTEGER, 0, MPI_COMM_WORLD)
```
The latter also works, because the count is a whole multiple of the section's extents in every dimension except the last, so we can still describe the transfer with a vector datatype.
**Case 2:** The datatype argument corresponds to an element of the array, or is a contiguous datatype, but the total number of elements expressed by `(count, datatype)` does not evenly divide the buffer.
Example:

```fortran
integer, dimension(4,4) :: A
call MPI_Bcast(A(1:3,1:3), 7, MPI_INTEGER, 0, MPI_COMM_WORLD)
```
This one is hard, because we need an indexed datatype to express sub-blocks of different sizes.
**Case 3:** The datatype argument is a non-contiguous derived datatype.
Example:

```fortran
integer :: A(100)
type(MPI_Datatype) :: v
call MPI_Type_vector(25, 1, 2, MPI_INTEGER, v)
call MPI_Type_commit(v)
call MPI_Bcast(A(2:100:2), 1, v, 0, MPI_COMM_WORLD)
```
We need to implement non-contiguous support in the CFI path. https://github.com/jeffhammond/vapaa/blob/main/tests/test_vector_noncontig.F90#L34 tests this case.