Closed: fishtek closed this issue 9 years ago.
Thanks for the report. Here is what happens.
You are not waiting on the promises returned by basic_publish. Unfortunately, Puka has no way of knowing whether or not you intend to wait for them, so it keeps the state of every single promise until it has been waited for.
There are two workarounds:
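One approach, confirmed in the reply below, is to wait on each promise returned by basic_publish so Puka can discard its per-promise state. A minimal sketch, assuming the standard Puka client API; the broker URL, queue name, and message bodies are only illustrative:

```python
import puka

# Connect and declare a queue (the URL and queue name are illustrative).
client = puka.Client("amqp://localhost/")
client.wait(client.connect())
client.wait(client.queue_declare(queue='test'))

for i in range(100000):
    promise = client.basic_publish(exchange='', routing_key='test',
                                   body='message %d' % i)
    # Waiting on the promise lets Puka drop the bookkeeping it keeps
    # for every publish, so memory stays flat instead of growing.
    client.wait(promise)

client.wait(client.close())
```

Calling client.wait() after every publish makes each send synchronous, but as noted in the reply below the throughput impact was negligible in this case.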
Majek, thank you!
I did as you suggested and placed a client.wait() on the promise returned by basic_publish right after that call. The memory is now staying steady and not creeping up!
I also thought there would be a performance hit, since each publish is now fully synchronous, but it seems to run at the same speed: the RabbitMQ server reports over 1,460 messages per second, so this solution will certainly do the trick!
Thanks again!
Hello,
An application I have written, which uses Puka in the RPC pattern, is experiencing a memory leak when many messages are passed through it.
I believe this is probably the same problem as the issue "Serious memory leak - if same channel used to consume & publish".
I was able to reproduce this problem with Puka 0.0.7 using a slightly modified version of the example code: rpc_client.py and rpc_server.py.
Here is the modified server:
Here is the modified client:
Not sure if it matters, but my RabbitMQ version is 3.3.5 with Erlang R16B03.
Memory usage for both the client and the server continues to increase the longer the two programs run.
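For reference, a stripped-down loop of this shape, publishing repeatedly without ever waiting on the returned promises, is enough to show the growth. This is only a hedged sketch matching the diagnosis above, not the actual modified example code; the broker URL, queue name, and message count are assumptions:

```python
import puka

# Hypothetical reproduction sketch (not the modified rpc_client.py /
# rpc_server.py): publish many messages and discard the promises.
client = puka.Client("amqp://localhost/")
client.wait(client.connect())
client.wait(client.queue_declare(queue='test'))

for i in range(100000):
    # The promise returned here is never passed to client.wait(), so
    # Puka keeps its state and the process's memory keeps climbing.
    client.basic_publish(exchange='', routing_key='test',
                         body='message %d' % i)

client.wait(client.close())
```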
Thanks