zeromq / libzmq

ZeroMQ core engine in C++, implements ZMTP/3.1
https://www.zeromq.org
Mozilla Public License 2.0

zmq 4.2.3: yqueue memory leak when subscriber restarts/crashes while it is connected to an idle publisher #4286

Open muraliadiga opened 2 years ago

muraliadiga commented 2 years ago


Issue description

yqueue leak when the subscriber restarts/crashes while the publisher is alive. I think each yqueue is around 16 KB, and whenever the subscriber restarts while it is connected to a publisher we see a ~2*16 KB = 32 KB memory leak at the publisher. I assume zmq creates 2 yqueues per subscriber connection (rx/tx). We destroy the zmq socket at the subscriber as part of its restart cleanup, and the linger period is set to 0 at the subscriber.
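
For reference, a minimal sketch of the subscriber-side cleanup described above (the function name is a placeholder, not the reporter's actual code):

```c
/* Hypothetical subscriber-side cleanup matching the description above:
 * the SUB socket is closed with ZMQ_LINGER set to 0 before the process
 * restarts. */
#include <zmq.h>

void subscriber_cleanup(void *ctx, void *sub)
{
    int linger = 0;
    /* Drop any pending data immediately on close instead of lingering. */
    zmq_setsockopt(sub, ZMQ_LINGER, &linger, sizeof(linger));
    zmq_close(sub);
    zmq_ctx_term(ctx);
}
```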

We are using zmq 4.2.3. Is this a known issue that was fixed in a later release?

I found the old thread below and it looks like I am hitting a similar issue, but zmq_poll() returns a "Not supported operation on the PUBLISHER" error. In some of our deployment scenarios the publisher can be idle (not publishing anything), and if the subscriber crashes continuously this results in a continuous leak in our publisher process and eventually in an out-of-memory assertion.

https://zeromq-dev.zeromq.narkive.com/OvHxrhPf/pub-sub-with-crashing-subscribers-possible-memory-leak#
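
Not from this report, but one workaround discussed in that thread is to give the otherwise idle publisher a chance to process its internal commands, which is when libzmq reaps the pipes of disconnected peers. A minimal sketch, assuming the publisher's main loop can call something like this periodically (whether it is sufficient on 4.2.3 is an assumption to be verified):

```c
/* Sketch of a workaround: periodically touch the idle PUB socket so
 * libzmq processes pending internal commands, including termination of
 * pipes left behind by subscribers that disconnected or crashed. */
#include <zmq.h>

static void poke_publisher(void *pub)
{
    int events = 0;
    size_t len = sizeof(events);
    /* Reading ZMQ_EVENTS makes the socket process queued commands
     * even though nothing is being published. */
    zmq_getsockopt(pub, ZMQ_EVENTS, &events, &len);
}
```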

Environment

  * libzmq version: 4.2.3
  * Transport: TCP

Minimal test code / Steps to reproduce the issue

  1. Restart the subscriber while the publisher is connected to it and idle (not publishing any messages). Publisher and subscriber are connected via a TCP socket. (A minimal sketch of this setup follows below.)
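
No repro code was attached; a minimal sketch of the publisher side implied by the step above could look like the following (the endpoint is a placeholder). Kill and restart the subscriber repeatedly against it while watching the publisher process's memory:

```c
/* Idle publisher: binds a PUB socket over TCP and never sends.
 * Restart/kill the subscriber repeatedly and watch this process's
 * memory (e.g. RSS) grow by roughly two yqueues per restart. */
#include <zmq.h>
#include <unistd.h>

int main(void)
{
    void *ctx = zmq_ctx_new();
    void *pub = zmq_socket(ctx, ZMQ_PUB);
    zmq_bind(pub, "tcp://*:5556");   /* placeholder endpoint */

    for (;;)
        sleep(60);                   /* idle: never publishes */

    /* unreachable */
    zmq_close(pub);
    zmq_ctx_term(ctx);
    return 0;
}
```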

What's the actual result? (include assertion message & call stack if applicable)

A zmq yqueue leak (2*16 KB = 32 KB) per subscriber restart, hence a memory leak is seen at the publisher.

What's the expected result?

No memory leak (zmq yqueue leak) should be seen at the publisher, even though it is idle (not publishing any messages).

muraliadiga commented 2 years ago

Any update on this issue? Can it be moved to the assigned state?

Thanks & Regards, Murali Adiga