jrs65 / scalapy

A python wrapper around ScaLAPACK

Strange initmpi behavior #12

Open zuoshifan opened 9 years ago

zuoshifan commented 9 years ago

While the following test script passes,

from mpi4py import MPI
from scalapy import core

comm = MPI.COMM_WORLD

rank = comm.rank
size = comm.size

if size != 4:
    raise Exception("Test needs 4 processes.")

def test_Dup():

    core.initmpi([2, 2], block_shape=[5, 5]) # comment it out to see what happens

    newcomm = comm.Dup()
    core.initmpi([2, 2], block_shape=[5, 5], comm=newcomm)

if __name__ == '__main__':
    test_Dup()

it fails if I comment out core.initmpi([2, 2], block_shape=[5, 5]). The error message is

scalapy.blacs.BLACSException: Grid initialisation failed.

This is strange, because I think grid initialization on newcomm should not depend on a prior grid initialization on MPI.COMM_WORLD. I don't know whether it is a bug or not.

zuoshifan commented 9 years ago

Also see http://stackoverflow.com/questions/22488233/blacs-context-value-and-multiple-mpi-communicators

zuoshifan commented 9 years ago

It seems one must first initialize a BLACS grid on MPI_COMM_WORLD before one can initialize BLACS grids on other MPI communicators. Why?
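
For now, a minimal workaround sketch based on the test above (assuming the same 4 processes and 2x2 grid) is simply to keep the initial call on MPI.COMM_WORLD before initializing on the duplicated communicator:

from mpi4py import MPI
from scalapy import core

comm = MPI.COMM_WORLD

# Initialize a grid on MPI.COMM_WORLD first -- this appears to be required.
core.initmpi([2, 2], block_shape=[5, 5])

# A grid on the duplicated communicator then initializes without error.
newcomm = comm.Dup()
core.initmpi([2, 2], block_shape=[5, 5], comm=newcomm)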