Open zuoshifan opened 9 years ago
While the following test script passes,
```python
from mpi4py import MPI
from scalapy import core

comm = MPI.COMM_WORLD
rank = comm.rank
size = comm.size

if size != 4:
    raise Exception("Test needs 4 processes.")

def test_Dup():
    core.initmpi([2, 2], block_shape=[5, 5])  # comment it out to see what happens
    newcomm = comm.Dup()
    core.initmpi([2, 2], block_shape=[5, 5], comm=newcomm)

if __name__ == '__main__':
    test_Dup()
```
it fails if I comment out the first `core.initmpi([2, 2], block_shape=[5, 5])` call. The error message is

```
scalapy.blacs.BLACSException: Grid initialisation failed.
```
This is strange, because I think grid initialization on `newcomm` should not depend on a prior grid initialization on `MPI.COMM_WORLD`. I don't know whether this is a bug or not.
Also see http://stackoverflow.com/questions/22488233/blacs-context-value-and-multiple-mpi-communicators
It seems one must first initialize a BLACS grid on `MPI_COMM_WORLD` before one can initialize BLACS grids on other MPI communicators. Why?