Open · ghost opened this issue 9 years ago
Definitely seeing the same issue, but I don't understand it well enough to say why the leaked deliveries aren't being cleared.
Could the autoAck path be un-leaked by just having the API call .Ack(false) on each delivery internally until a proper fix is put in place?
@trist4n Been a while since I had to deal with this; I remember trying a work-around but it did not work. If I remember correctly, this was the only way I could solve the memory issue.
It has a bit of a performance hit (at least compared to doing nothing) in my case, but it's probably better than the alternative.
Anyway, on closer inspection it isn't really a memory leak per se. When autoAck is set, Qos() does not apply to the channel, so the client consumes messages as fast as possible with no flow control.
The doc comment on *Channel.Consume() in channel.go says:
Deliveries on the returned chan will be buffered indefinitely. To limit memory
of this buffer, use the Channel.Qos method to limit the amount of
unacknowledged/buffered deliveries the server will deliver on this Channel.
But Qos() does not apply to consumers started with autoAck (as the documentation for Qos() says).
So I don't know what to do other than explicit acking. I do not understand how a client using autoAck is supposed to rate-limit itself in general, nor in this library.
It is not a leak but the classic producer/consumer problem with unbounded buffers where a producer (in this case, data coming down the socket) outpaces consumer(s). Use manual acknowledgements if you experience this: that's why they exist.
In the Java client there is a mechanism that protects against this via TCP back pressure, but it would be a non-trivial change to make here. So I wouldn't assume all clients will attempt it, given that the protocol already has a feature for dealing with exactly this behaviour.
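For reference, the manual-acknowledgement pattern being recommended here looks roughly like this with this library. This is a sketch, not code from the thread: the helper name, queue name, prefetch value, and handler are assumptions.

```go
import "github.com/streadway/amqp"

// consumeWithManualAcks is a hypothetical helper showing the pattern:
// a prefetch limit plus explicit acks keeps the number of deliveries
// buffered on the client bounded by prefetchCount.
func consumeWithManualAcks(ch *amqp.Channel, queue string, handle func([]byte)) error {
	if err := ch.Qos(100, 0, false); err != nil { // prefetchCount=100, prefetchSize=0, global=false
		return err
	}
	deliveries, err := ch.Consume(queue, "", false, false, false, false, nil) // autoAck=false
	if err != nil {
		return err
	}
	for d := range deliveries {
		handle(d.Body)
		if err := d.Ack(false); err != nil { // multiple=false: ack just this delivery
			return err
		}
	}
	return nil
}
```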
I was seeing this exact thing when using manual ACKs without QoS. Adding QoS with a prefetch of 1 worked, but it seems like a hack. conn.Close() and ch.Close() did not help, which I would have expected to release these resources. I have an application which opens and closes AMQP connections at a high frequency, which resulted in 16GB of memory usage before my server crashed.
Here's a screenshot after a few thousand requests:
Using manual acknowledgements with prefetch is not a hack: this problem is one of the key reasons why those features exist.
The prefetch buffer is leaked when the connection and channel are closed. This is a bug. Even this example has no QoS in it: https://github.com/streadway/amqp/blob/master/_examples/simple-consumer/consumer.go
Thanks for clarifying.
Just to make sure you don't chase your tail: the memory leak goes away when using a QoS of 1, but when not using QoS, closing the channel and connection does not release the implicit buffer.
@mattwilliamson we are trying to work out what "implicit buffer" means here. Can you please point us at a specific code line/field? Thank you.
This seems to be closely related to #264.
If you create a Consumer with auto-ack set to true, a memory leak will result.
Sample code:
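A minimal auto-ack consumer along these lines exhibits the behaviour described (a hypothetical sketch; the broker URL and the queue name "work" are assumptions):

```go
package main

import (
	"log"

	"github.com/streadway/amqp"
)

func main() {
	conn, err := amqp.Dial("amqp://guest:guest@localhost:5672/")
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ch, err := conn.Channel()
	if err != nil {
		log.Fatal(err)
	}
	defer ch.Close()

	// autoAck=true: the prefetch limit does not apply, so the server pushes
	// messages as fast as the socket allows and the client buffers them.
	deliveries, err := ch.Consume("work", "", true, false, false, false, nil)
	if err != nil {
		log.Fatal(err)
	}

	for d := range deliveries {
		// If processing is slower than delivery, buffered deliveries
		// (and their bodies) accumulate in memory.
		_ = d.Body
	}
}
```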
If I populate a RabbitMQ server with 1M messages and try to process them, this results in about 1GB of RSS for the program.
If I make the following changes, only 12.5MB of RSS is used to process 1M messages:
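A change along these lines matches the description (hypothetical, written relative to the sketch above; the prefetch value of 1 is an assumption): turn autoAck off, add a prefetch limit, and ack each delivery after processing.

```go
// Bound the number of unacknowledged deliveries the server will send.
if err := ch.Qos(1, 0, false); err != nil { // prefetchCount=1
	log.Fatal(err)
}

// autoAck=false so the prefetch limit applies.
deliveries, err := ch.Consume("work", "", false, false, false, false, nil)
if err != nil {
	log.Fatal(err)
}

for d := range deliveries {
	// ... process d.Body ...
	if err := d.Ack(false); err != nil { // multiple=false
		log.Fatal(err)
	}
}
```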
I used pprof tools and found the leak:
The leak ends up occurring at: