
Immediate barrier not completed on all ranks when interleaved with point-to-point communication #35

Open AndrewGaspar opened 4 years ago

AndrewGaspar commented 4 years ago

This issue was encountered in one of the tests for rsmpi. A C++ version of the test can be found here: https://gist.github.com/AndrewGaspar/c75a1f0db91aa4633492c563cf107c4b

This reproduces with the latest 10.1.1 release from the GitHub Releases page.

Essentially, it appears that progress isn't always made on an outstanding immediate barrier when it is interleaved with point-to-point communication. This only reproduces with 3 or more ranks. On my system, this is the result of running the program with 3 ranks:

> mpiexec.exe -n 3 .\repro.exe
1: Sending 1
2: Sending 1
1: Sent 1
2: Sent 1
2: Sending 2
2: Sent 2
1: Sending 2
1: Sent 2
0: Received 1 from 1
0: Received 2 from 1
0: Received 1 from 2
0: Received 2 from 2
1: Sending 3
1: Sent 3
0: Received 3 from 1

In this specific example, you can see the immediate barrier never completes for rank 2. However, the standard (MPI 3.1) seems to indicate that collectives, including MPI_Ibarrier, should be able to make progress at the same time as point-to-point communication.
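
The linked gist has the exact test; as a rough sketch of the failing shape (the round structure and message counts here are assumptions, not the gist's actual code), the pattern is an MPI_Ibarrier left outstanding while point-to-point traffic flows:

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int rounds = 3;  // assumed; the gist may differ
    for (int r = 1; r <= rounds; ++r) {
        // Start an immediate barrier on every rank...
        MPI_Request barrier;
        MPI_Ibarrier(MPI_COMM_WORLD, &barrier);

        if (rank != 0) {
            // ...then each non-root rank sends one message to rank 0
            // while the barrier is still outstanding.
            std::printf("%d: Sending %d\n", rank, r);
            MPI_Send(&r, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
            std::printf("%d: Sent %d\n", rank, r);
        } else {
            // ...while rank 0 drains one message per peer, in any order.
            for (int i = 1; i < size; ++i) {
                int value;
                MPI_Status status;
                MPI_Recv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                         MPI_COMM_WORLD, &status);
                std::printf("0: Received %d from %d\n",
                            value, status.MPI_SOURCE);
            }
        }

        // Every rank then waits for this round's barrier. Per the bug
        // report, this wait never returns on at least one rank.
        MPI_Wait(&barrier, MPI_STATUS_IGNORE);
    }

    MPI_Finalize();
    return 0;
}
```

With this shape, the log above is consistent with rank 2's second barrier never completing, so it never reaches "Sending 3" while rank 1 proceeds to round 3.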

jerryliu20d commented 1 year ago

I have a similar problem when using MPI_Barrier. The print order across ranks is incorrect, but the rest of the computation is correctly blocked by the MPI_Barrier.
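
For what it's worth, the print order alone may not indicate a bug: MPI_Barrier synchronizes the processes, not their stdout streams, and mpiexec forwards each rank's output asynchronously. A minimal sketch of that distinction (the print placement is an assumption, since the original code isn't shown):

```cpp
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    std::printf("%d: before barrier\n", rank);
    std::fflush(stdout);

    // Every rank is held here until all ranks arrive...
    MPI_Barrier(MPI_COMM_WORLD);

    // ...but the launcher may still interleave the lines above and
    // below across ranks, because stdout is forwarded asynchronously.
    std::printf("%d: after barrier\n", rank);

    MPI_Finalize();
    return 0;
}
```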