nanomsg / nng

nanomsg-next-generation -- light-weight brokerless messaging
https://nng.nanomsg.org
MIT License

fixes #1827: Windows deadlock on nng_close() #1828

Closed: gdamore closed this pull request 4 months ago

gdamore commented 5 months ago

My current theory is that, for some reason I don't yet fully understand, we have code waiting on the condition variable without the closing flag having been set. (Possibly the failure is a synchronization issue, since s_closing is changed while not protected by the global lock.)

At any rate, the attempt to avoid the cost of a wake-up here is silly: pthread_cond_broadcast (and, one assumes, other implementations such as the Windows one, to which I don't have source) is nearly free when there are no waiters. (Pthreads uses a relaxed-order memory read to check for waiters, so no barrier is involved.)

So we can just do the wake unconditionally.
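
As a rough illustration of the pattern described above (a minimal sketch, not the actual nng code), the closing flag is set under the lock and the broadcast is always issued, whether or not anyone is waiting:

```c
/* Minimal sketch, not the nng implementation: set the closing flag and wake
 * all waiters unconditionally.  pthread_cond_broadcast() is nearly free when
 * nobody is waiting, so there is no point in trying to skip it. */
#include <pthread.h>
#include <stdbool.h>

struct sock {
    pthread_mutex_t mtx;
    pthread_cond_t  cv;
    bool            closing;
};

static void
sock_close(struct sock *s)
{
    pthread_mutex_lock(&s->mtx);
    s->closing = true;
    pthread_mutex_unlock(&s->mtx);
    pthread_cond_broadcast(&s->cv); /* always wake, even with no waiters */
}

static void
sock_wait_closed(struct sock *s)
{
    pthread_mutex_lock(&s->mtx);
    while (!s->closing) {
        pthread_cond_wait(&s->cv, &s->mtx);
    }
    pthread_mutex_unlock(&s->mtx);
}
```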

I'd appreciate it if folks who are encountering the problem can tell me if this change resolves for them.

codecov[bot] commented 5 months ago

Codecov Report

Attention: Patch coverage is 88.23529%, with 2 lines in your changes missing coverage. Please review.

Project coverage is 79.48%. Comparing base (e46b41a) to head (4c08f96).

| Files | Patch % | Lines |
|---|---|---|
| src/sp/transport/tcp/tcp.c | 50.00% | 1 Missing :warning: |
| src/sp/transport/tls/tls.c | 50.00% | 1 Missing :warning: |

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##           master    #1828      +/-   ##
==========================================
+ Coverage   79.41%   79.48%   +0.07%
==========================================
  Files          95       95
  Lines       21487    21484       -3
==========================================
+ Hits        17063    17076      +13
+ Misses       4424     4408      -16
```


alzix commented 5 months ago

The issue is still reproducible on this branch. (screenshot attached)

gdamore commented 5 months ago

Please have another go with this branch -- another commit has been made which I hope will help.

gdamore commented 5 months ago

Well, that didn't work as well as hoped. It seems the read/write callbacks are also involved here.

gdamore commented 5 months ago

Ah, reaping is needed because we are in the callback when we fail. And it's interesting that this happens consistently for IPC, which suggests that I'm on the right path.

gdamore commented 5 months ago

(Another go, restoring the reaping...)

alzix commented 5 months ago

I was not able to reproduce the original issue anymore, but I cannot get a decent number of iterations, as the server is crashing in nni_list_node_remove as previously reported. (screenshots attached)

alzix commented 5 months ago

In win_ipcconn.c:229, in ipc_send_cb, there is a check:

```c
if ((aio = nni_list_first(&c->send_aios)) == NULL) {
    // Should indicate that it was closed.
    nni_mtx_unlock(&c->mtx);
    return;
}
```

I think it does not do what is expected, as I can see in the debugger that c->closed == true.
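
For illustration only, a hypothetical variant of that check which also consults the closed flag directly (the structure mirrors the excerpt above, but the exact fields and the fix itself are assumptions, not taken from win_ipcconn.c):

```c
/* Hypothetical sketch, not the actual nng code: bail out of the send
 * callback if the connection is already marked closed, instead of relying
 * solely on the send_aios list being empty. */
nni_mtx_lock(&c->mtx);
if (c->closed || ((aio = nni_list_first(&c->send_aios)) == NULL)) {
    // Closed, or nothing queued: nothing to complete.
    nni_mtx_unlock(&c->mtx);
    return;
}
/* ... complete the aio as usual ... */
nni_mtx_unlock(&c->mtx);
```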

alzix commented 5 months ago

There are two types of crashes here: one in ipc_send_cb and the other in ipc_recv_cb. Both occur on close. Based on my observations, in these cases the aio object is malformed, which later leads to a crash. (screenshot attached) ~~Either memory was not properly initialized or~~ some other thread overwrote it.

From https://en.wikipedia.org/wiki/Magic_number_(programming)#Debug_values:

> The 0xDDDDDDDD pattern is used by Microsoft's C/C++ debug free() function to mark freed heap memory.

So it seems the aio contains dangling pointers...
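
Since the 0xDD fill pattern is documented debug-CRT behavior, a small hypothetical helper (not part of nng) could assert on it to catch the dangling pointer closer to its source:

```c
/* Hypothetical debug helper, not part of nng: under MSVC's debug CRT, freed
 * heap memory is filled with 0xDD bytes, so a pointer whose bits read back as
 * 0xDDDDDDDD (0xDDDDDDDDDDDDDDDD on 64-bit) almost certainly points into
 * freed memory.  Asserting on the pattern turns a mysterious crash into an
 * immediate, well-located failure. */
#include <assert.h>
#include <stdint.h>

static void
assert_not_freed_pattern(const void *p)
{
    uintptr_t v = (uintptr_t) p;

    /* Check the low 32 bits so the test works for 32- and 64-bit builds. */
    assert((v & 0xFFFFFFFFu) != (uintptr_t) 0xDDDDDDDDu);
}
```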

alzix commented 5 months ago

From my observations, the problem occurs when ipc_recv_cb and/or ipc_send_cb are executed after nni_sock_shutdown.

gdamore commented 5 months ago

@alzix thanks for the analysis. I will try to get to the bottom of this soon ... I've just been completely swamped with $dayjob.

gdamore commented 5 months ago

Definitely a use-after-free.

gdamore commented 4 months ago

This is very definitely Windows-specific. It may impact TCP as well, but the callback structure here is used with overlapped I/O (a Windows thing).

gdamore commented 4 months ago

So I guess the send_cb is somehow still running. I'm still trying to get to the bottom of this, because I would not expect that there are any posted I/Os at that point.

itayzafrir commented 4 months ago

Added some info in PR https://github.com/nanomsg/nng/pull/1831#issuecomment-2124896736.

alzix commented 4 months ago

> So I guess the send_cb is somehow still running. I'm still trying to get to the bottom of this, because I would not expect that there are any posted I/Os at that point.

According to https://learn.microsoft.com/en-us/windows/win32/fileio/canceling-pending-i-o-operations:

> There is no guarantee that underlying drivers correctly support cancellation.

Perhaps this is the case?

gdamore commented 4 months ago

> > So I guess the send_cb is somehow still running. I'm still trying to get to the bottom of this, because I would not expect that there are any posted I/Os at that point.
>
> According to https://learn.microsoft.com/en-us/windows/win32/fileio/canceling-pending-i-o-operations:
>
> > There is no guarantee that underlying drivers correctly support cancellation.
>
> Perhaps this is the case?

Then the driver should continue to completion, which would be fine. But Windows named pipes and TCP both support cancellation. The problem is a defect in my logic, not missing Windows functionality. I'm still working to get to the bottom of it -- I thought I had understood it, but clearly I was missing something.
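
For context, cancelling overlapped I/O still produces a completion packet for each cancelled operation (typically with ERROR_OPERATION_ABORTED), so per-operation state can only be released once that completion has been drained. A minimal sketch of that pattern, not the nng implementation:

```c
/* Sketch, not the nng code: cancel all pending overlapped I/O on a handle
 * and let the completions drain through the completion port.  Cancelled
 * operations still deliver completion packets, usually with
 * ERROR_OPERATION_ABORTED, so the completion callback must expect them. */
#include <windows.h>

static void
cancel_pending_io(HANDLE h)
{
    if (!CancelIoEx(h, NULL)) {
        DWORD rv = GetLastError();
        /* ERROR_NOT_FOUND simply means nothing was pending. */
        if (rv != ERROR_NOT_FOUND) {
            /* handle or log the unexpected failure */
        }
    }
    /* Do not free per-operation state here; wait for each completion
     * (possibly with ERROR_OPERATION_ABORTED) to arrive first. */
}
```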

gdamore commented 4 months ago

I've pushed another change... this fixes a bunch of problems.

The IPC pipe still has a use after free... I will fix it tomorrow. (I'm out of steam tonight.) Debugging this has been... challenging.

The TCP code now seems rock solid (it had use-after-free bugs in it), and the listener code was brittle. The change also covers the statistics crash.

itayzafrir commented 4 months ago

@gdamore thank you for looking into this, and I hear you on the debugging challenge here :) Looking forward to your updates.

gdamore commented 4 months ago

Well, I think I've made some progress. It appears to be a very subtle data race in the aio framework. Essentially we can wind up modifying the linked-list node pointers while not holding the same lock that was used to test them, and that leads to a problem. I think a memory barrier is needed, because we cannot really share the lock that was used for the test, as the aio can move around.

It might be safer to add an atomic variable to the aio, but I'm loath to do so for fear of impacting performance.
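
For illustration, the atomic-variable option mentioned above might look roughly like this C11 sketch (names and structure are hypothetical, not the nng aio code); the release/acquire pair is the cost being weighed against performance:

```c
/* Hypothetical sketch: publish "this aio is finished" with an atomic flag so
 * a reader gets release/acquire ordering even when the list node pointers are
 * manipulated under a different lock. */
#include <stdatomic.h>
#include <stdbool.h>

typedef struct sketch_aio {
    atomic_bool active; /* written with release, read with acquire */
    /* ... linked-list node, completion state, etc. ... */
} sketch_aio;

/* Writer side: all earlier writes to the aio become visible before the flag
 * reads back as false. */
static void
sketch_aio_finish(sketch_aio *aio)
{
    atomic_store_explicit(&aio->active, false, memory_order_release);
}

/* Reader side: if this returns false, the writer's earlier writes are
 * guaranteed to be visible. */
static bool
sketch_aio_is_active(sketch_aio *aio)
{
    return atomic_load_explicit(&aio->active, memory_order_acquire);
}
```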

gdamore commented 4 months ago

I've added a bunch more asserts, and I can confirm that this problem only affects Windows. It affects both Windows x86 and ARM. I think the problem is that my logic for removing the object from the I/O completion port isn't adequate. It seems that we are getting completions for I/Os that we should not, and I can't understand why this is happening.

Windows does not give an elegant way to just "detach" from the completion port, which means there isn't a simple way to check whether an operation is pending or not. Supposedly closing the handle should do it, but I'm still seeing some surprises.
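
For reference, association with a completion port is one-way: CreateIoCompletionPort attaches a handle to an existing port, and there is no call to detach it again; the association only ends when the handle is closed. A tiny sketch:

```c
/* Sketch: associate a handle with an existing I/O completion port.  Windows
 * offers no call to undo this association; it persists until CloseHandle()
 * on the file/pipe/socket handle. */
#include <windows.h>
#include <stdbool.h>

static bool
attach_to_iocp(HANDLE h, HANDLE iocp, ULONG_PTR key)
{
    /* On success the existing port handle is returned. */
    return (CreateIoCompletionPort(h, iocp, key, 0) == iocp);
}
```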

gdamore commented 4 months ago

Well, I might have to eat my words. After half an hour of running tests in a loop, a similar crash has now happened on macOS.

gdamore commented 4 months ago

I have made some changes to try to simplify and unify the code. This seems to have greatly reduced the crash incidence, but it has not completely solved the problem. It seems there may still be a race somewhere, and the I/O completion ports do appear to be delivering completions for objects that I believe have been removed. It almost makes me believe that duplicate completion packets are being submitted, but that seems nonsensical.

What's frustrating is that these problems seem to have only recently started happening -- older versions of NNG didn't suffer from any of them. I'm going to run some tests, because the other thing that has changed is ... well, Windows. So I wonder if some regression in Windows is in play here. (That's not where I go first, but I'm really having a difficult time reasoning about the behavior I'm observing.)

Adding to the complexity is that I'm running this on Windows in Parallels on a Mac M1. It seems to work well, mostly, but I could be suffering from being on the bleeding edge.

If anyone watching here can try an older version of Windows 10, or even Windows 8, that would be great. Also on real hardware.

gdamore commented 4 months ago

There is a distinct possibility that my local tests were impaired by ... "interesting" emulation. I'm not sure.