This should address #61 by having close wait for the "pending" flag to be cleared, which signals that the IOCP callback has completed. Checking whether the OVERLAPPED has completed is not enough, because it does not guarantee the callback itself has finished. Originally the code erroneously checked whether the buffer pointer was NULL to cover this case; it now correctly checks the "pending" flag.

This, however, exposed several other issues:

a) `open_ov` was always left pending after open, so the connection could never close.

b) On MQTT disconnect, `recv` keeps returning success with 0 bytes read (i.e. the remote side closed the connection), essentially continuously. For each of these reads xio_sk allocates a 64 KB buffer to read into (per IOCP semantics on Windows), then tries to reallocate it to 0, which fails, so memory consumption spiked during close.

A follow-up might be to limit the buffer sizes for xio_sk to e.g. 4 KB, but we should first do some measurements on the maximum buffer size MQTT actually needs.
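For illustration, here is a minimal sketch of the "pending flag" idea (not the actual xio_sk code; `async_op_t`, `on_io_complete`, and `wait_for_op` are hypothetical names for this example). The flag is set before the overlapped I/O is posted and cleared only at the very end of the IOCP completion callback, so close can wait on it to know the callback has fully run, whereas `HasOverlappedIoCompleted()` only tells us the kernel finished the I/O:

```c
#include <windows.h>

typedef struct async_op
{
    OVERLAPPED ov;          /* passed to WSARecv/WSASend/ConnectEx */
    volatile LONG pending;  /* 1 while the completion callback may still run */
    char* buffer;           /* receive buffer, may be reallocated */
} async_op_t;

/* IOCP completion callback, e.g. registered via CreateThreadpoolIo. */
static void CALLBACK on_io_complete(PTP_CALLBACK_INSTANCE instance,
    void* context, void* overlapped, ULONG io_result,
    ULONG_PTR bytes_transferred, PTP_IO io)
{
    async_op_t* op = (async_op_t*)context;
    (void)instance; (void)overlapped; (void)io_result;
    (void)bytes_transferred; (void)io;

    /* ... handle the result and hand bytes to the upper layer ... */

    /* Clear "pending" only when the callback is completely done, so that
       close cannot free op->buffer while the callback is still using it. */
    InterlockedExchange(&op->pending, 0);
}

/* close waits for the callback to signal completion instead of calling
   HasOverlappedIoCompleted(&op->ov), which can return TRUE before the
   callback has run to completion. */
static void wait_for_op(async_op_t* op)
{
    while (InterlockedCompareExchange(&op->pending, 0, 0) != 0)
    {
        SleepEx(1, TRUE);   /* yield; real code would pump its scheduler */
    }
}
```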
@marcschier,
Thanks for your contribution as a Microsoft full-time employee or intern. You do not need to sign a CLA.
Thanks,
Microsoft Pull Request Bot