open-mpi / ompi

Open MPI main development repository
https://www.open-mpi.org

Correctness failure when using BTL RDMA #3685

Closed vspetrov closed 6 years ago

vspetrov commented 7 years ago

Thank you for taking the time to submit an issue!

Background information

Possibly related to the "patcher" memory framework

What version of Open MPI are you using? (e.g., v1.10.3, v2.1.0, git branch name and hash, etc.)

v2.0.x v2.x master

Describe how Open MPI was installed (e.g., from a source/distribution tarball, from a git clone, from an operating system distribution package, etc.)

Built from source, cloned from GitHub

Please describe the system on which you are running


Details of the problem

Multithreaded correctness test (attached mt_stress.zip ) fails with OMPI. Reproduced on 2 nodes.

shell$ mpirun -np 3 --map-by node -mca pml ob1 -mca btl openib,self   `nif 50`    -mca coll ^hcoll ./mt_stress 1497004394

...
Splitting id 124
CORRECTNESS ERROR: id 124, TEST_TYPE 2, pos 3692, value 3, expected 6, dtype MPI_INT, root 0, rank 2, count 15045, comm_size 3, color 1
CORRECTNESS ERROR: id 124, TEST_TYPE 2, pos 3692, value 3, expected 6, dtype MPI_INT, root 0, rank 0, count 15045, comm_size 3, color 1
CORRECTNESS ERROR: id 124, TEST_TYPE 2, pos 3692, value 3, expected 6, dtype MPI_INT, root 0, rank 1, count 15045, comm_size 3, color 1
Splitting id 125
Splitting id 126
...

This is an allreduce failure. After some debugging I narrowed it down to a single point-to-point exchange inside allreduce. One rank sends the data to the other side, but the data is received corrupted for some reason.

The test passes with "-mca mpi_leave_pinned 0", OR if Open MPI is built without memory-manager support (--without-memory-manager). This is why my suspicion falls on the "patcher" memory framework.
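Concretely, the two workarounds are (run-time flags as in the reproducer command above; configure flag as reported):

```shell
# Workaround 1: disable leave-pinned behavior at run time
mpirun -np 3 --map-by node -mca pml ob1 -mca btl openib,self \
       -mca mpi_leave_pinned 0 -mca coll ^hcoll ./mt_stress 1497004394

# Workaround 2: build Open MPI without memory-manager support
./configure --without-memory-manager
```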

Additionally, the same issue is observed with pml yalla (Mellanox MXM-based p2p). Again, disabling memory notifications (MXM_MEM_ON_DEMAND_MAP=n) helps.

Since "patcher" was not present in ompi_v1.10, I wanted to try the test with that version. btl openib wouldn't work, since it didn't support MPI_THREAD_MULTIPLE in 1.10; however, pml yalla works without errors on 1.10.
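For intuition about why disabling leave-pinned hides the problem, here is a toy Python model (not OMPI code; all names are illustrative) of a registration cache keyed by virtual address going stale when memory is freed and re-malloc'd at the same address without the cache being notified:

```python
# Toy model of a leave-pinned registration cache going stale.
# "Registration" here just snapshots the buffer contents, standing in
# for the RDMA mapping that the NIC keeps for a pinned region.

class RegCache:
    def __init__(self):
        self.cache = {}            # virtual address -> "registration"

    def send(self, addr, buf, notified):
        if addr not in self.cache:
            self.cache[addr] = list(buf)   # register (pin) the buffer
        elif notified:
            self.cache[addr] = list(buf)   # hook fired: re-register
        # else: silently reuse a stale registration -- the bug
        return self.cache[addr]

cache = RegCache()
addr = 0x1000                      # same virtual address reused by malloc

sent = cache.send(addr, [1, 2, 3], notified=True)
assert sent == [1, 2, 3]

# Buffer freed and re-malloc'd at the same address with new data,
# but the memory hook never told the cache:
sent = cache.send(addr, [4, 5, 6], notified=False)
assert sent == [1, 2, 3]           # receiver gets stale (corrupt) data

# With working hooks (or no caching at all), the data is correct:
sent = cache.send(addr, [4, 5, 6], notified=True)
assert sent == [4, 5, 6]
```

With mpi_leave_pinned 0, every send effectively takes the "notified" path (register and deregister per call), which is why it avoids the corruption at a performance cost.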

jladd-mlnx commented 7 years ago

@markalle @jjhursey @gpaulsen is this something you've observed in your testing?

gpaulsen commented 7 years ago

Yes, we're struggling with some issues around patcher right now as well on ppc64le. Still trying to root cause.

jladd-mlnx commented 7 years ago

Thanks, @gpaulsen. This was actually reported to us by an IBM test engineer who thought it was an HCOLL issue. @vspetrov's report is what we've been able to make of it: it's most likely patcher-related and not an HCOLL bug.

alsrgv commented 7 years ago

Thanks @vspetrov for reporting this issue!

I am seeing exactly the same issue with MPI_Allreduce, except in my case all MPI calls are made from a single thread (though the application is otherwise multi-threaded and uses jemalloc). I was also able to trace it down to a single p2p call where correct data is sent and corrupted data is received.

It reproduces with OpenIB and does not reproduce with TCP. It does not reproduce with -mca mpi_leave_pinned 0 or with --without-memory-manager, as above.

Unfortunately, with -mca mpi_leave_pinned 0, performance with OpenIB is worse than with TCP, which defeats the purpose of RDMA.

I also tried Open MPI 1.10.7 and it works correctly. Performance is not as good as 2.1.1 with the memory manager (which gives corruption), but it is definitely better than 2.1.1 without pinned memory.

I have a ring-reduce implementation based on pure MPI_Send()/MPI_Recv() that works well and does not lead to corruption, but I'm worried that it may do so in certain circumstances. Are there any pointers on what's special about MPI_Allreduce() and why it corrupts the data?

alsrgv commented 7 years ago

UPDATE: I discovered that I can reproduce this issue with my ring-reduce implementation too if I modify it to use a malloc'd buffer. Currently it uses a pinned memory buffer and avoids the corruption.
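For reference, the data movement of a ring allreduce (reduce-scatter phase followed by an allgather phase) can be simulated without MPI. This is a sketch of the general algorithm, not the code from the thread; each step stands for one point-to-point send/recv per rank, which is exactly the kind of transfer that was observed to arrive corrupted:

```python
def ring_allreduce(bufs):
    """Simulate a ring allreduce on a list of per-rank integer vectors.

    Phase 1 (reduce-scatter): after p-1 steps, rank r holds the complete
    elementwise sum of chunk (r+1) % p.
    Phase 2 (allgather): the completed chunks circulate around the ring.
    """
    p = len(bufs)
    n = len(bufs[0])
    assert n % p == 0, "simplification: count divisible by rank count"
    c = n // p                                 # chunk size
    bufs = [list(b) for b in bufs]             # don't mutate caller's data
    ch = lambda k: slice((k % p) * c, (k % p) * c + c)

    # Reduce-scatter: rank r sends chunk (r - t) to its right neighbor
    # and accumulates chunk (r - t - 1) received from its left neighbor.
    for t in range(p - 1):
        sends = [bufs[r][ch(r - t)] for r in range(p)]   # concurrent sends
        for r in range(p):
            recv = sends[(r - 1) % p]                    # from left neighbor
            s = ch(r - t - 1)
            bufs[r][s] = [a + b for a, b in zip(bufs[r][s], recv)]

    # Allgather: pass the fully-reduced chunks around the ring.
    for t in range(p - 1):
        sends = [bufs[r][ch(r + 1 - t)] for r in range(p)]
        for r in range(p):
            bufs[r][ch(r - t)] = sends[(r - 1) % p]
    return bufs

ranks = [[1] * 6, [2] * 6, [3] * 6]
out = ring_allreduce(ranks)
assert all(b == [6] * 6 for b in out)   # elementwise sum on every rank
```

Since every rank's buffer is overwritten chunk-by-chunk from received data, a single stale-registration receive (as with the malloc'd buffer above) poisons the final result on every rank.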

jsquyres commented 7 years ago

Per discussion on 1 Aug Webex:

@vspetrov Does this happen with v3.0.x? (we're assuming you didn't test v3.0.x -- can you clarify?)

@hjelmn Can you chime in here? It's apparently reproducible with openib, and a test case was attached in the original description.

hjelmn commented 7 years ago

Probably a race that isn't triggered by the slower case without mpi_leave_pinned. I can try to see if this happens with Aries later this week.

hppritcha commented 7 years ago

@vspetrov could you check the v3.0.x branch?

hjelmn commented 7 years ago

Nothing to do with the patcher. Looks like it might be a consistency problem with rcache/grdma. Investigating now.

hjelmn commented 7 years ago

Think I have a fix. Testing now. Want to make sure there are no regressions and that the problem isn't just being masked.

hjelmn commented 7 years ago

This bug is the result of a regression fix gone bad. I am working on a way to fix this without re-introducing the original bug. I am hoping to have a workaround today with the long term fix being a new registration cache.

Note, I changed the title to reflect that this is not a threading issue but a more general issue with how we ensure consistency in the registration cache.

hjelmn commented 7 years ago

@bosilca The regression fix that introduced this was for a memory hook deadlock you identified. Can you provide the reproducer for the deadlock?

bosilca commented 7 years ago

I first got it with MADNESS, but then I could reproduce it with any multi-threaded C++ application.

hjelmn commented 7 years ago

@bosilca Ok, I think the attached app may be sufficient to trigger it as well. I am working with a patch that uses the deadlock "plan B" which uses a free list for the vma items. It seems to be working and is probably a good enough solution until I finish the new registration cache. I will post the PR shortly for testing.
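The "plan B" described here, pre-allocating VMA items on a free list so the insert path never calls the allocator (and thus never re-enters the memory hooks), can be sketched roughly as follows. The names are hypothetical, not the actual rcache/grdma symbols:

```python
# Illustrative sketch of a free-list-backed VMA item pool. The point:
# the insert path only pops pre-allocated items, so it never calls
# malloc while holding the cache lock / running inside a memory hook.

class VmaItemPool:
    def __init__(self, capacity=2048):       # band-aid default: 2K items
        self.free = [{"base": None, "len": 0} for _ in range(capacity)]

    def grow(self, count):
        # Allocation happens here, OUTSIDE the hook/lock critical path.
        self.free.extend({"base": None, "len": 0} for _ in range(count))

    def insert(self, base, length):
        if not self.free:
            raise MemoryError("pool exhausted; must grow outside the hook")
        item = self.free.pop()               # no malloc on this path
        item["base"], item["len"] = base, length
        return item

    def release(self, item):
        self.free.append(item)               # likewise, no free()

pool = VmaItemPool(capacity=4)
a = pool.insert(0x1000, 4096)
pool.release(a)
assert len(pool.free) == 4
```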

hjelmn commented 7 years ago

Ok, fix is up in PR #4026. I have a simple reproducer that will go into MTT once this ticket is closed.

jsquyres commented 7 years ago

Per discussion on 8 Aug 2017 webex: @hjelmn explains that this happened because we didn't hook madvise() when we created patcher, solely because we didn't think we needed to. This issue and Nathan's further testing show that we do need to hook madvise(), because there are cases where you can end up in a deadlock scenario (even in a single-threaded test).

The issue is that there is a race condition going on between simultaneously freeing and mallocing memory -- it's a classic hold-and-wait. @hjelmn will add a single-threaded test in ompi-tests that will always cause this scenario.

Background: the problem is in our VMA tree insert. We really should not be allocating or freeing memory inside the VMA allocation functions. However, we can't know the size we need to alloc until we get deep inside the VMA allocation functions, which led to the "we'll just alloc deep inside these routines" implementation. Somehow we need to fix this -- @hjelmn is looking into alternate data structures and/or algorithms. I.e., the real fix is to replace and/or redesign our VMA data structures.
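The re-entrancy at the heart of this can be seen with a toy model (plain Python, purely illustrative): the memory hook takes the cache lock, and the VMA insert then calls back into the allocator, whose hooks want that same lock again, so even a single thread wedges:

```python
import threading

cache_lock = threading.Lock()     # non-recursive, like the real VMA lock

def vma_insert(allocates):
    if allocates:
        toy_malloc()              # BAD: re-enters the allocator under the lock

def toy_malloc():
    # Stand-in for an allocator whose hooks also need cache_lock. A
    # blocking acquire here would hang forever; we detect it instead.
    if not cache_lock.acquire(blocking=False):
        raise RuntimeError("deadlock: allocator re-entered under cache lock")
    cache_lock.release()

def memory_hook(allocates_inside):
    with cache_lock:              # hook fired by free()/madvise()
        vma_insert(allocates_inside)

# Allocating deep inside the VMA insert deadlocks even single-threaded:
try:
    memory_hook(allocates_inside=True)
    deadlocked = False
except RuntimeError:
    deadlocked = True
assert deadlocked

# With pre-allocated items (no malloc on the insert path), it is fine:
memory_hook(allocates_inside=False)
```

This is why the fix keeps all allocation out of the locked insert path rather than, say, switching to a recursive lock, which would only hide the hold-and-wait.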

Meaning: @hjelmn's PRs are band-aids: they greatly reduce the possibility of the issue happening by increasing the default size of the reg cache to 2K items. This significantly decreases the probability that the reg cache will need to be expanded (which, in turn, can cause the problem described above, where the VMA would need to allocate more items). Again, the real fix is to replace/redesign the VMA data structures, but this fix should hold us for a while as @hjelmn works on the real fix.

Note that v2.x and v3.0.x PRs are already filed. v2.0.x will require a back port (because the rcache is still inside the mpool in 2.0.x -- the individual patches should apply, but files have moved between the v2.0.x and v2.x trees, and that requires human intervention -- @hjelmn is working on it).

jsquyres commented 7 years ago

@vspetrov @jladd-mlnx Can you test again on master / v2.x and see if you can reproduce the issue? According to @hjelmn, the band-aid fix should be good enough.

vspetrov commented 7 years ago

@jsquyres Sorry for not replying for so long; the GitHub notifications were going to the wrong mail address. Anyway, I've tried the latest ompi/master and it works, so the workaround does resolve (hide) the problem.

jsquyres commented 6 years ago

This has been fixed (worked around) in v2.x and v3.0.x and master. Waiting for #4078 for v2.0.x.

alsrgv commented 6 years ago

This patch fixes the correctness issue I was observing in my distributed TensorFlow use case (https://github.com/uber/horovod) as well. Looking forward to the official release!

hppritcha commented 6 years ago

PRs to 2.0.x, 2.x and 3.0.x to fix this issue have been merged. Closing.